Mirror of https://github.com/ynput/ayon-core.git, synced 2025-12-25 05:14:40 +01:00

Merge remote-tracking branch 'origin/feature/PYPE-323-consolidate_integrator' into feature/PYPE-316-muster-render-support

This commit is contained in: commit f20f47db67

114 changed files with 2812 additions and 2051 deletions
.gitignore (vendored) | 8
@@ -4,3 +4,11 @@
 __pycache__/
 *.py[cod]
 *$py.class
+
+# Documentation
+###############
+/docs/build
+
+# Editor backup files #
+#######################
+*~

README.md | 35
@@ -1,20 +1,31 @@
-The base studio _config_ for [Avalon](https://getavalon.github.io/)
+Pype
+====
+
+The base studio _config_ for [Avalon](https://getavalon.github.io/)
+
+Currently this config is dependent on our customised avalon installation, so it won't work with vanilla avalon core. We're working on open sourcing all of the necessary code, though. You can still get inspiration or take our individual validators and scripts, which should work just fine in other pipelines.
+
+_This configuration acts as a starting point for all pype club clients with avalon deployment._

-### Code convention
+Code convention
+---------------

 Below are some of the standard practices applied to this repository.

-- **Etiquette: PEP8**
-  \- All code is written in PEP8. It is recommended you use a linter as you work; flake8 and pylint are both good options.
-- **Etiquette: Napoleon docstrings**
-  \- Any docstrings are made in Google Napoleon format. See [Napoleon](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) for details.
-- **Etiquette: Semantic Versioning**
-  \- This project follows [semantic versioning](http://semver.org).
-- **Etiquette: Underscore means private**
-  \- Anything prefixed with an underscore means that it is internal to wherever it is used. For example, a variable name is only ever used in the parent function or class. A module is not for use by the end-user. In contrast, anything without an underscore is public, but not necessarily part of the API. Members of the API reside in `api.py`.
-- **API: Idempotence**
-  \- A public function must be able to be called twice and produce the exact same result. This means no changing of state without restoring the previous state when finishing. For example, if a function requires changing the current selection in Autodesk Maya, it must restore the previous selection prior to completing.
+- **Etiquette: PEP8**
+
+  All code is written in PEP8. It is recommended you use a linter as you work; flake8 and pylint are both good options.
+- **Etiquette: Napoleon docstrings**
+
+  Any docstrings are made in Google Napoleon format. See [Napoleon](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) for details.
+
+- **Etiquette: Semantic Versioning**
+
+  This project follows [semantic versioning](http://semver.org).
+- **Etiquette: Underscore means private**
+
+  Anything prefixed with an underscore means that it is internal to wherever it is used. For example, a variable name is only ever used in the parent function or class. A module is not for use by the end-user. In contrast, anything without an underscore is public, but not necessarily part of the API. Members of the API reside in `api.py`.
+
+- **API: Idempotence**
+
+  A public function must be able to be called twice and produce the exact same result. This means no changing of state without restoring the previous state when finishing. For example, if a function requires changing the current selection in Autodesk Maya, it must restore the previous selection prior to completing.

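To make the last two conventions concrete, here is a minimal sketch, assuming a Maya session where `maya.cmds` is importable; the function name and the `.renderable` attribute usage are illustrative assumptions, not part of this config:

from maya import cmds


def set_render_camera(camera):
    """Mark `camera` renderable without disturbing the user's selection.

    Args:
        camera (str): Name of the camera node (hypothetical example).

    Returns:
        list: The selection that was active before the call.
    """
    previous_selection = cmds.ls(selection=True) or []
    try:
        # Changing the selection is the state we must restore afterwards.
        cmds.select(camera, replace=True)
        cmds.setAttr("{}.renderable".format(camera), True)
    finally:
        # Idempotence: put the previous state back before finishing.
        if previous_selection:
            cmds.select(previous_selection, replace=True)
        else:
            cmds.select(clear=True)
    return previous_selection
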
docs/Makefile (new file) | 19
@@ -0,0 +1,19 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SOURCEDIR     = source
BUILDDIR      = build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

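The catch-all target forwards any goal to sphinx-build's "make mode", so `make html` amounts to the call below; a Python sketch for readers without make, assuming sphinx-build is on PATH and the working directory is docs/:

import subprocess

# `make html` expands to: sphinx-build -M html source build
subprocess.run(
    ["sphinx-build", "-M", "html", "source", "build"],
    check=True,  # raise CalledProcessError if the build fails
)
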
docs/make.bat (new file) | 35
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd

docs/source/conf.py (new file) | 222
@@ -0,0 +1,222 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sys
import os
from pprint import pprint
from pypeapp.pypeLauncher import PypeLauncher
from pypeapp.storage import Storage
from pypeapp.deployment import Deployment

pype_setup = os.getenv('PYPE_ROOT')
d = Deployment(pype_setup)
launcher = PypeLauncher()

tools, config_path = d.get_environment_data()

os.environ['PYPE_CONFIG'] = config_path
os.environ['TOOL_ENV'] = os.path.normpath(os.path.join(config_path,
                                                       'environments'))
launcher._add_modules()
Storage().update_environment()
launcher._load_default_environments(tools=tools)

# -- Project information -----------------------------------------------------

project = 'pype'
copyright = '2019, Orbi Tools'
author = 'Orbi Tools'

# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = ''


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.napoleon',
    'sphinx.ext.intersphinx',
    'sphinx.ext.todo',
    'sphinx.ext.coverage',
    'sphinx.ext.mathjax',
    'sphinx.ext.viewcode',
    'sphinx.ext.autosummary',
    'recommonmark'
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'

# The master toctree document.
master_doc = 'index'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'friendly'

# -- Options for autodoc -----------------------------------------------------
autodoc_default_flags = ['members']
autosummary_generate = True


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
html_theme_options = {
    'collapse_navigation': False
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}


# -- Options for HTMLHelp output ---------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'pypedoc'


# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'pype.tex', 'pype Documentation',
     'OrbiTools', 'manual'),
]


# -- Options for manual page output ------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'pype', 'pype Documentation',
     [author], 1)
]


# -- Options for Texinfo output ----------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'pype', 'pype Documentation',
     author, 'pype', 'One line description of project.',
     'Miscellaneous'),
]


# -- Options for Epub output -------------------------------------------------

# Bibliographic Dublin Core info.
epub_title = project

# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''

# A unique identification for the text.
#
# epub_uid = ''

# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']


# -- Extension configuration -------------------------------------------------

# -- Options for intersphinx extension ---------------------------------------

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}

# -- Options for todo extension ----------------------------------------------

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True

docs/source/index.rst (new file) | 18
@@ -0,0 +1,18 @@
.. pype documentation master file, created by
   sphinx-quickstart on Mon May 13 17:18:23 2019.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to pype's documentation!
================================

.. toctree::
   readme
   modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

docs/source/modules.rst (new file) | 7
@@ -0,0 +1,7 @@
pype
====

.. toctree::
   :maxdepth: 6

   pype

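The pype.*.rst stubs that follow all share the automodule layout produced by sphinx-apidoc (invoked later in make_docs.bat); a rough sketch of an equivalent call, with paths assumed here rather than taken from that script:

import subprocess

# -M puts the module page before submodule pages, -f overwrites old
# stubs, -d 6 matches the :maxdepth: used in modules.rst above.
subprocess.run(
    ["sphinx-apidoc", "-M", "-f", "-d", "6", "-o", "docs/source", "pype"],
    check=True,
)
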
docs/source/pype.aport.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.aport package
==================

.. automodule:: pype.aport
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.aport.api module
---------------------

.. automodule:: pype.aport.api
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.avalon_apps.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.avalon\_apps package
=========================

.. automodule:: pype.avalon_apps
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.avalon\_apps.avalon\_app module
------------------------------------

.. automodule:: pype.avalon_apps.avalon_app
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.clockify.rst (new file) | 36
@@ -0,0 +1,36 @@
pype.clockify package
=====================

.. automodule:: pype.clockify
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.clockify.clockify module
-----------------------------

.. automodule:: pype.clockify.clockify
   :members:
   :undoc-members:
   :show-inheritance:

pype.clockify.clockify\_api module
----------------------------------

.. automodule:: pype.clockify.clockify_api
   :members:
   :undoc-members:
   :show-inheritance:

pype.clockify.widget\_settings module
-------------------------------------

.. automodule:: pype.clockify.widget_settings
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.ftrack.ftrack_server.rst (new file) | 36
@@ -0,0 +1,36 @@
pype.ftrack.ftrack\_server package
==================================

.. automodule:: pype.ftrack.ftrack_server
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.ftrack.ftrack\_server.event\_server module
-----------------------------------------------

.. automodule:: pype.ftrack.ftrack_server.event_server
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.ftrack\_server.event\_server\_cli module
----------------------------------------------------

.. automodule:: pype.ftrack.ftrack_server.event_server_cli
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.ftrack\_server.ftrack\_server module
------------------------------------------------

.. automodule:: pype.ftrack.ftrack_server.ftrack_server
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.ftrack.lib.rst (new file) | 52
@@ -0,0 +1,52 @@
pype.ftrack.lib package
=======================

.. automodule:: pype.ftrack.lib
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.ftrack.lib.avalon\_sync module
-----------------------------------

.. automodule:: pype.ftrack.lib.avalon_sync
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.lib.ftrack\_action\_handler module
----------------------------------------------

.. automodule:: pype.ftrack.lib.ftrack_action_handler
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.lib.ftrack\_app\_handler module
-------------------------------------------

.. automodule:: pype.ftrack.lib.ftrack_app_handler
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.lib.ftrack\_base\_handler module
--------------------------------------------

.. automodule:: pype.ftrack.lib.ftrack_base_handler
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.lib.ftrack\_event\_handler module
---------------------------------------------

.. automodule:: pype.ftrack.lib.ftrack_event_handler
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.ftrack.rst (new file) | 52
@@ -0,0 +1,52 @@
pype.ftrack package
===================

.. automodule:: pype.ftrack
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.ftrack.ftrack_server
   pype.ftrack.lib

Submodules
----------

pype.ftrack.credentials module
------------------------------

.. automodule:: pype.ftrack.credentials
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.ftrack\_module module
---------------------------------

.. automodule:: pype.ftrack.ftrack_module
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.login\_dialog module
--------------------------------

.. automodule:: pype.ftrack.login_dialog
   :members:
   :undoc-members:
   :show-inheritance:

pype.ftrack.login\_tools module
-------------------------------

.. automodule:: pype.ftrack.login_tools
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.fusion.rst (new file) | 27
@@ -0,0 +1,27 @@
pype.fusion package
===================

.. automodule:: pype.fusion
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.fusion.scripts

Submodules
----------

pype.fusion.lib module
----------------------

.. automodule:: pype.fusion.lib
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.fusion.scripts.rst (new file) | 28
@@ -0,0 +1,28 @@
pype.fusion.scripts package
===========================

.. automodule:: pype.fusion.scripts
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.fusion.scripts.fusion\_switch\_shot module
-----------------------------------------------

.. automodule:: pype.fusion.scripts.fusion_switch_shot
   :members:
   :undoc-members:
   :show-inheritance:

pype.fusion.scripts.publish\_filesequence module
------------------------------------------------

.. automodule:: pype.fusion.scripts.publish_filesequence
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.houdini.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.houdini package
====================

.. automodule:: pype.houdini
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.houdini.lib module
-----------------------

.. automodule:: pype.houdini.lib
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.maya.rst (new file) | 52
@@ -0,0 +1,52 @@
pype.maya package
=================

.. automodule:: pype.maya
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.maya.action module
-----------------------

.. automodule:: pype.maya.action
   :members:
   :undoc-members:
   :show-inheritance:

pype.maya.customize module
--------------------------

.. automodule:: pype.maya.customize
   :members:
   :undoc-members:
   :show-inheritance:

pype.maya.lib module
--------------------

.. automodule:: pype.maya.lib
   :members:
   :undoc-members:
   :show-inheritance:

pype.maya.menu module
---------------------

.. automodule:: pype.maya.menu
   :members:
   :undoc-members:
   :show-inheritance:

pype.maya.plugin module
-----------------------

.. automodule:: pype.maya.plugin
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.nuke.rst (new file) | 44
@@ -0,0 +1,44 @@
pype.nuke package
=================

.. automodule:: pype.nuke
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.nuke.actions module
------------------------

.. automodule:: pype.nuke.actions
   :members:
   :undoc-members:
   :show-inheritance:

pype.nuke.lib module
--------------------

.. automodule:: pype.nuke.lib
   :members:
   :undoc-members:
   :show-inheritance:

pype.nuke.menu module
---------------------

.. automodule:: pype.nuke.menu
   :members:
   :undoc-members:
   :show-inheritance:

pype.nuke.templates module
--------------------------

.. automodule:: pype.nuke.templates
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.premiere.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.premiere package
=====================

.. automodule:: pype.premiere
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.premiere.templates module
------------------------------

.. automodule:: pype.premiere.templates
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.rst (new file) | 88
@@ -0,0 +1,88 @@
pype package
============

.. automodule:: pype
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.aport
   pype.avalon_apps
   pype.clockify
   pype.ftrack
   pype.fusion
   pype.houdini
   pype.maya
   pype.nuke
   pype.premiere
   pype.scripts
   pype.services
   pype.standalonepublish
   pype.tools
   pype.widgets

Submodules
----------

pype.action module
------------------

.. automodule:: pype.action
   :members:
   :undoc-members:
   :show-inheritance:

pype.api module
---------------

.. automodule:: pype.api
   :members:
   :undoc-members:
   :show-inheritance:

pype.launcher\_actions module
-----------------------------

.. automodule:: pype.launcher_actions
   :members:
   :undoc-members:
   :show-inheritance:

pype.lib module
---------------

.. automodule:: pype.lib
   :members:
   :undoc-members:
   :show-inheritance:

pype.plugin module
------------------

.. automodule:: pype.plugin
   :members:
   :undoc-members:
   :show-inheritance:

pype.setdress\_api module
-------------------------

.. automodule:: pype.setdress_api
   :members:
   :undoc-members:
   :show-inheritance:

pype.templates module
---------------------

.. automodule:: pype.templates
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.scripts.rst (new file) | 28
@@ -0,0 +1,28 @@
pype.scripts package
====================

.. automodule:: pype.scripts
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.scripts.fusion\_switch\_shot module
----------------------------------------

.. automodule:: pype.scripts.fusion_switch_shot
   :members:
   :undoc-members:
   :show-inheritance:

pype.scripts.publish\_filesequence module
-----------------------------------------

.. automodule:: pype.scripts.publish_filesequence
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.services.idle_manager.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.services.idle\_manager package
===================================

.. automodule:: pype.services.idle_manager
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.services.idle\_manager.idle\_manager module
------------------------------------------------

.. automodule:: pype.services.idle_manager.idle_manager
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.services.rst (new file) | 17
@@ -0,0 +1,17 @@
pype.services package
=====================

.. automodule:: pype.services
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.services.idle_manager
   pype.services.statics_server
   pype.services.timers_manager

docs/source/pype.services.statics_server.rst (new file) | 20
@@ -0,0 +1,20 @@
pype.services.statics\_server package
=====================================

.. automodule:: pype.services.statics_server
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.services.statics\_server.statics\_server module
----------------------------------------------------

.. automodule:: pype.services.statics_server.statics_server
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.services.timers_manager.rst (new file) | 28
@@ -0,0 +1,28 @@
pype.services.timers\_manager package
=====================================

.. automodule:: pype.services.timers_manager
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.services.timers\_manager.timers\_manager module
----------------------------------------------------

.. automodule:: pype.services.timers_manager.timers_manager
   :members:
   :undoc-members:
   :show-inheritance:

pype.services.timers\_manager.widget\_user\_idle module
-------------------------------------------------------

.. automodule:: pype.services.timers_manager.widget_user_idle
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.standalonepublish.resources.rst (new file) | 8
@@ -0,0 +1,8 @@
pype.standalonepublish.resources package
========================================

.. automodule:: pype.standalonepublish.resources
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.standalonepublish.rst (new file) | 44
@@ -0,0 +1,44 @@
pype.standalonepublish package
==============================

.. automodule:: pype.standalonepublish
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.standalonepublish.resources
   pype.standalonepublish.widgets

Submodules
----------

pype.standalonepublish.app module
---------------------------------

.. automodule:: pype.standalonepublish.app
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.publish module
-------------------------------------

.. automodule:: pype.standalonepublish.publish
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.standalonepublish\_module module
-------------------------------------------------------

.. automodule:: pype.standalonepublish.standalonepublish_module
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.standalonepublish.widgets.rst (new file) | 156
@@ -0,0 +1,156 @@
pype.standalonepublish.widgets package
======================================

.. automodule:: pype.standalonepublish.widgets
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.standalonepublish.widgets.button\_from\_svgs module
--------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.button_from_svgs
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_asset module
--------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_asset
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_filter\_proxy\_exact\_match module
------------------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_filter_proxy_exact_match
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_filter\_proxy\_recursive\_sort module
---------------------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_filter_proxy_recursive_sort
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_node module
-------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_node
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_tasks\_template module
------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_tasks_template
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_tree module
-------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_tree
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.model\_tree\_view\_deselectable module
---------------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.model_tree_view_deselectable
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_asset module
---------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_asset
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_asset\_view module
---------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_asset_view
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_component\_item module
-------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_component_item
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_components module
--------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_components
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_components\_list module
--------------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_components_list
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_drop\_empty module
---------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_drop_empty
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_drop\_frame module
---------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_drop_frame
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_family module
----------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_family
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_family\_desc module
----------------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_family_desc
   :members:
   :undoc-members:
   :show-inheritance:

pype.standalonepublish.widgets.widget\_shadow module
----------------------------------------------------

.. automodule:: pype.standalonepublish.widgets.widget_shadow
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.tools.assetcreator.rst (new file) | 36
@@ -0,0 +1,36 @@
pype.tools.assetcreator package
===============================

.. automodule:: pype.tools.assetcreator
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.tools.assetcreator.app module
----------------------------------

.. automodule:: pype.tools.assetcreator.app
   :members:
   :undoc-members:
   :show-inheritance:

pype.tools.assetcreator.model module
------------------------------------

.. automodule:: pype.tools.assetcreator.model
   :members:
   :undoc-members:
   :show-inheritance:

pype.tools.assetcreator.widget module
-------------------------------------

.. automodule:: pype.tools.assetcreator.widget
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/pype.tools.rst (new file) | 15
@@ -0,0 +1,15 @@
pype.tools package
==================

.. automodule:: pype.tools
   :members:
   :undoc-members:
   :show-inheritance:

Subpackages
-----------

.. toctree::

   pype.tools.assetcreator

docs/source/pype.widgets.rst (new file) | 36
@@ -0,0 +1,36 @@
pype.widgets package
====================

.. automodule:: pype.widgets
   :members:
   :undoc-members:
   :show-inheritance:

Submodules
----------

pype.widgets.message\_window module
-----------------------------------

.. automodule:: pype.widgets.message_window
   :members:
   :undoc-members:
   :show-inheritance:

pype.widgets.popup module
-------------------------

.. automodule:: pype.widgets.popup
   :members:
   :undoc-members:
   :show-inheritance:

pype.widgets.project\_settings module
-------------------------------------

.. automodule:: pype.widgets.project_settings
   :members:
   :undoc-members:
   :show-inheritance:

docs/source/readme.rst (new file) | 1
@@ -0,0 +1 @@
.. include:: ../../README.md

make_docs.bat (new file) | 46
@@ -0,0 +1,46 @@
@echo off
echo [92m^>^>^>[0m Generating pype-setup documentation, please wait ...
call "C:\Users\Public\pype_env2\Scripts\activate.bat"

setlocal enableextensions enabledelayedexpansion
set _OLD_PYTHONPATH=%PYTHONPATH%
echo [92m^>^>^>[0m Adding repos path
rem add stuff in repos
call :ResolvePath repodir "..\"

for /d %%d in ( %repodir%*) do (
	echo - adding path %%d
	set PYTHONPATH=%%d;!PYTHONPATH!
)

echo [92m^>^>^>[0m Adding python vendors path
rem add python vendor paths
call :ResolvePath vendordir "..\..\vendor\python\"

for /d %%d in ( %vendordir%*) do (
	echo - adding path %%d
	set PYTHONPATH=%%d;!PYTHONPATH!
)

echo [92m^>^>^>[0m Setting PYPE_CONFIG
call :ResolvePath pypeconfig "..\pype-config"
set PYPE_CONFIG=%pypeconfig%
echo [92m^>^>^>[0m Setting PYPE_ROOT
call :ResolvePath pyperoot "..\..\"
set PYPE_ROOT=%pyperoot%
set PYTHONPATH=%PYPE_ROOT%;%PYTHONPATH%
echo [92m^>^>^>[0m Setting PYPE_ENV
set PYPE_ENV="C:\Users\Public\pype_env2"

call "docs\make.bat" clean
sphinx-apidoc -M -f -d 6 --ext-autodoc --ext-intersphinx --ext-viewcode -o docs\source pype %PYPE_ROOT%\repos\pype\pype\vendor\*
call "docs\make.bat" html
echo [92m^>^>^>[0m Doing cleanup ...
set PYTHONPATH=%_OLD_PYTHONPATH%
set PYPE_CONFIG=
call "C:\Users\Public\pype_env2\Scripts\deactivate.bat"
exit /b

:ResolvePath
set %1=%~dpfn2
exit /b

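For readers not fluent in batch: the :ResolvePath subroutine only turns a relative path into a fully qualified one (that is what the %~dpfn2 modifier does to argument 2). A rough Python equivalent, with names ours rather than the script's:

import os

def resolve_path(relative):
    # Absolute, normalised form of `relative`, like %~dpfn2 in batch.
    return os.path.abspath(relative)

print(resolve_path(".."))  # e.g. the parent "repos" directory
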
@@ -43,7 +43,8 @@ from .lib import (
     get_asset_data,
     modified_environ,
     add_tool_to_environment,
-    get_data_hierarchical_attr
+    get_data_hierarchical_attr,
+    get_avalon_project_template
 )

 __all__ = [
@@ -81,6 +82,7 @@ __all__ = [
     "set_hierarchy",
     "set_project_code",
     "get_data_hierarchical_attr",
+    "get_avalon_project_template",

     # preloaded templates
     "Anatomy",

@@ -109,12 +109,6 @@ def install():
     # api.set_avalon_workdir()
     # reload_config()

-    # import sys
-
-    # for path in sys.path:
-    #     if path.startswith("C:\\Users\\Public"):
-    #         sys.path.remove(path)
-
     log.info("Registering Nuke plug-ins..")
     pyblish.register_plugin_path(PUBLISH_PATH)
     avalon.register_plugin_path(avalon.Loader, LOAD_PATH)

@@ -3,7 +3,6 @@ import sys
 import os
 from collections import OrderedDict
-from pprint import pprint
 from avalon.vendor.Qt import QtGui
 from avalon import api, io, lib
 import avalon.nuke
 import pype.api as pype
@@ -20,6 +19,12 @@ self = sys.modules[__name__]
 self._project = None


+for path in sys.path:
+    log.info(os.path.normpath(path))
+    if "C:\\Users\\Public" in os.path.normpath(path):
+        log.info("_ removing from sys.path: `{}`".format(path))
+        sys.path.remove(path)
+
 def onScriptLoad():
     if nuke.env['LINUX']:
         nuke.tcl('load ffmpegReader')

@@ -472,30 +477,30 @@ def update_frame_range(start, end, root=None):
     else:
         nuke.root()[key].setValue(value)


-def get_additional_data(container):
-    """Get Nuke's related data for the container
-
-    Args:
-        container(dict): the container found by the ls() function
-
-    Returns:
-        dict
-    """
-
-    node = container["_tool"]
-    tile_color = node['tile_color'].value()
-    if tile_color is None:
-        return {}
-
-    hex = '%08x' % tile_color
-    rgba = [
-        float(int(hex[0:2], 16)) / 255.0,
-        float(int(hex[2:4], 16)) / 255.0,
-        float(int(hex[4:6], 16)) / 255.0
-    ]
-
-    return {"color": QtGui.QColor().fromRgbF(rgba[0], rgba[1], rgba[2])}
+#
+# def get_additional_data(container):
+#     """Get Nuke's related data for the container
+#
+#     Args:
+#         container(dict): the container found by the ls() function
+#
+#     Returns:
+#         dict
+#     """
+#
+#     node = container["_tool"]
+#     tile_color = node['tile_color'].value()
+#     if tile_color is None:
+#         return {}
+#
+#     hex = '%08x' % tile_color
+#     rgba = [
+#         float(int(hex[0:2], 16)) / 255.0,
+#         float(int(hex[2:4], 16)) / 255.0,
+#         float(int(hex[4:6], 16)) / 255.0
+#     ]
+#
+#     return {"color": Qt.QtGui.QColor().fromRgbF(rgba[0], rgba[1], rgba[2])}


 def get_write_node_template_attr(node):

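For context on the hunk above: Nuke packs tile_color as a 0xRRGGBBAA integer, which the (now commented-out) function unpacks into float RGB. A standalone sketch of that conversion, runnable without Nuke:

def tile_color_to_rgbf(tile_color):
    digits = '%08x' % tile_color          # e.g. 0xff0000ff -> 'ff0000ff'
    return [
        int(digits[i:i + 2], 16) / 255.0  # RR, GG, BB; trailing AA ignored
        for i in (0, 2, 4)
    ]

assert tile_color_to_rgbf(0xff0000ff) == [1.0, 0.0, 0.0]  # pure red
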
@@ -5,18 +5,13 @@ from pyblish import api as pyblish

 from .. import api

-from .menu import install as menu_install
-
-from .lib import (
-    show,
-    setup,
-    add_to_filemenu
+from .menu import (
+    install as menu_install,
+    _update_menu_task_label
 )

 from pypeapp import Logger

 log = Logger().get_logger(__name__, "nukestudio")

 AVALON_CONFIG = os.getenv("AVALON_CONFIG", "pype")
@@ -35,41 +30,9 @@ if os.getenv("PYBLISH_GUI", None):
     pyblish.register_gui(os.getenv("PYBLISH_GUI", None))


-def reload_config():
-    """Attempt to reload pipeline at run-time.
-
-    CAUTION: This is primarily for development and debugging purposes.
-
-    """
-
-    import importlib
-
-    for module in (
-        "pypeapp",
-        "{}.api".format(AVALON_CONFIG),
-        "{}.templates".format(AVALON_CONFIG),
-        "{}.nukestudio.inventory".format(AVALON_CONFIG),
-        "{}.nukestudio.lib".format(AVALON_CONFIG),
-        "{}.nukestudio.menu".format(AVALON_CONFIG),
-    ):
-        log.info("Reloading module: {}...".format(module))
-        try:
-            module = importlib.import_module(module)
-            reload(module)
-        except Exception as e:
-            log.warning("Cannot reload module: {}".format(e))
-            importlib.reload(module)
-
-
 def install(config):

-    # api.set_avalon_workdir()
-    # reload_config()
-
-    # import sys
-    # for path in sys.path:
-    #     if path.startswith("C:\\Users\\Public"):
-    #         sys.path.remove(path)
+    _register_events()

     log.info("Registering NukeStudio plug-ins..")
     pyblish.register_host("nukestudio")
@@ -104,6 +67,11 @@ def uninstall():
     api.reset_data_from_templates()


+def _register_events():
+    avalon.on("taskChanged", _update_menu_task_label)
+    log.info("Installed event callback for 'taskChanged'..")
+
+
 def ls():
     """List available containers.

@@ -8,7 +8,13 @@ import pyblish.api
 # Host libraries
 import hiero

+from PySide2 import (QtWidgets, QtGui)
+from pypeapp import Logger
+log = Logger().get_logger(__name__, "nukestudio")
+
 import avalon.api as avalon
 import pype.api as pype

-from avalon.vendor.Qt import (QtWidgets, QtGui)


 cached_process = None
@@ -19,6 +25,70 @@ self._has_been_setup = False
 self._has_menu = False
 self._registered_gui = None

+AVALON_CONFIG = os.getenv("AVALON_CONFIG", "pype")
+
+
+def set_workfiles():
+    ''' Wrapping function for workfiles launcher '''
+    from avalon.tools import workfiles
+    S = avalon.Session
+    active_project_root = os.path.normpath(
+        os.path.join(S['AVALON_PROJECTS'], S['AVALON_PROJECT'])
+    )
+    workdir = os.environ["AVALON_WORKDIR"]
+    workfiles.show(workdir)
+    project = hiero.core.projects()[-1]
+    project.setProjectRoot(active_project_root)
+
+    # get project data from avalon db
+    project_data = pype.get_project_data()
+
+    # get format and fps property from avalon db on project
+    width = project_data['resolution_width']
+    height = project_data['resolution_height']
+    pixel_aspect = project_data['pixel_aspect']
+    fps = project_data['fps']
+    format_name = project_data['code']
+
+    # create new format in hiero project
+    format = hiero.core.Format(width, height, pixel_aspect, format_name)
+    project.setOutputFormat(format)
+
+    # set fps to hiero project
+    project.setFramerate(fps)
+
+    log.info("Project property has been synchronised with Avalon db")
+
+
+def reload_config():
+    """Attempt to reload pipeline at run-time.
+
+    CAUTION: This is primarily for development and debugging purposes.
+
+    """
+
+    import importlib
+
+    for module in (
+        "avalon.lib",
+        "avalon.pipeline",
+        "pypeapp",
+        "{}.api".format(AVALON_CONFIG),
+        "{}.templates".format(AVALON_CONFIG),
+        "{}.nukestudio.lib".format(AVALON_CONFIG),
+        "{}.nukestudio.menu".format(AVALON_CONFIG),
+        "{}.nukestudio.tags".format(AVALON_CONFIG)
+    ):
+        log.info("Reloading module: {}...".format(module))
+        try:
+            module = importlib.import_module(module)
+            reload(module)
+        except Exception as e:
+            log.warning("Cannot reload module: {}".format(e))
+            importlib.reload(module)
+
+
 def setup(console=False, port=None, menu=True):
     """Setup integration

@@ -1,7 +1,5 @@
 import os
-from avalon.api import Session
-from pprint import pprint

 import sys
 import hiero.core

 try:
@@ -12,27 +10,62 @@ except Exception:

 from hiero.ui import findMenuAction

+from avalon.api import Session
+
+from .tags import add_tags_from_presets
+
+from .lib import (
+    reload_config,
+    set_workfiles
+)
 from pypeapp import Logger

 log = Logger().get_logger(__name__, "nukestudio")

+self = sys.modules[__name__]
+self._change_context_menu = None
+
+
+def _update_menu_task_label(*args):
+    """Update the task label in Avalon menu to current session"""
+
+    object_name = self._change_context_menu
+    found_menu = findMenuAction(object_name)
+
+    if not found_menu:
+        log.warning("Can't find menuItem: {}".format(object_name))
+        return
+
+    label = "{}, {}".format(Session["AVALON_ASSET"],
+                            Session["AVALON_TASK"])
+
+    menu = found_menu.menu()
+    self._change_context_menu = label
+    menu.setTitle(label)
+
+
+#
 def install():
+    # here is the best place to add menu
     from avalon.tools import (
         creator,
         publish,
         workfiles,
         cbloader,
         cbsceneinventory,
         contextmanager,
         libraryloader
     )

-    menu_name = os.environ['PYPE_STUDIO_NAME']
+    menu_name = os.environ['AVALON_LABEL']
+
+    context_label = "{0}, {1}".format(
+        Session["AVALON_ASSET"], Session["AVALON_TASK"]
+    )
+
+    self._change_context_menu = context_label

     # Grab Hiero's MenuBar
     M = hiero.ui.menuBar()

     # Add a Menu to the MenuBar
     file_action = None

     try:
         check_made_menu = findMenuAction(menu_name)
     except Exception:
@@ -43,50 +76,83 @@ def install():
     else:
         menu = check_made_menu.menu()

-    actions = [{
-        'action': QAction('Set Context', None),
-        'function': contextmanager.show,
-        'icon': QIcon('icons:Position.png')
-    },
-        {
-        'action': QAction('Create...', None),
-        'function': creator.show,
-        'icon': QIcon('icons:ColorAdd.png')
-    },
-        {
-        'action': QAction('Load...', None),
-        'function': cbloader.show,
-        'icon': QIcon('icons:CopyRectangle.png')
-    },
-        {
-        'action': QAction('Publish...', None),
-        'function': publish.show,
-        'icon': QIcon('icons:Output.png')
-    },
-        {
-        'action': QAction('Manage...', None),
-        'function': cbsceneinventory.show,
-        'icon': QIcon('icons:ModifyMetaData.png')
-    },
-        {
-        'action': QAction('Library...', None),
-        'function': libraryloader.show,
-        'icon': QIcon('icons:ColorAdd.png')
-    }]
+    actions = [
+        {
+            'parent': context_label,
+            'action': QAction('Set Context', None),
+            'function': contextmanager.show,
+            'icon': QIcon('icons:Position.png')
+        },
+        "separator",
+        {
+            'action': QAction("Work Files...", None),
+            'function': set_workfiles,
+            'icon': QIcon('icons:Position.png')
+        },
+        {
+            'action': QAction('Create Default Tags..', None),
+            'function': add_tags_from_presets,
+            'icon': QIcon('icons:Position.png')
+        },
+        "separator",
+        {
+            'action': QAction('Create...', None),
+            'function': creator.show,
+            'icon': QIcon('icons:ColorAdd.png')
+        },
+        {
+            'action': QAction('Load...', None),
+            'function': cbloader.show,
+            'icon': QIcon('icons:CopyRectangle.png')
+        },
+        {
+            'action': QAction('Publish...', None),
+            'function': publish.show,
+            'icon': QIcon('icons:Output.png')
+        },
+        {
+            'action': QAction('Manage...', None),
+            'function': cbsceneinventory.show,
+            'icon': QIcon('icons:ModifyMetaData.png')
+        },
+        {
+            'action': QAction('Library...', None),
+            'function': libraryloader.show,
+            'icon': QIcon('icons:ColorAdd.png')
+        },
+        "separator",
+        {
+            'action': QAction('Reload pipeline...', None),
+            'function': reload_config,
+            'icon': QIcon('icons:ColorAdd.png')
+        }]

     # Create menu items
     for a in actions:
-        pprint(a)
-        # create action
-        for k in a.keys():
-            if 'action' in k:
-                action = a[k]
-            elif 'function' in k:
-                action.triggered.connect(a[k])
-            elif 'icon' in k:
-                action.setIcon(a[k])
+        add_to_menu = menu
+        if isinstance(a, dict):
+            # create action
+            for k in a.keys():
+                if 'parent' in k:
+                    submenus = [sm for sm in a[k].split('/')]
+                    submenu = None
+                    for sm in submenus:
+                        if submenu:
+                            submenu.addMenu(sm)
+                        else:
+                            submenu = menu.addMenu(sm)
+                    add_to_menu = submenu
+                if 'action' in k:
+                    action = a[k]
+                elif 'function' in k:
+                    action.triggered.connect(a[k])
+                elif 'icon' in k:
+                    action.setIcon(a[k])

-        # add action to menu
-        menu.addAction(action)
-        hiero.ui.registerAction(action)
+            # add action to menu
+            add_to_menu.addAction(action)
+            hiero.ui.registerAction(action)
+        elif isinstance(a, str):
+            add_to_menu.addSeparator()

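A note on the rewritten loop above: entries are either the string "separator" or a dict, and a dict's optional 'parent' key is a '/'-separated path of submenu names. An illustrative entry, with a hypothetical callback, could look like this:

actions.append({
    'parent': 'Experimental/Tools',         # nested submenu path
    'action': QAction('My Tool...', None),
    'function': my_tool_show,               # hypothetical callback
    'icon': QIcon('icons:Output.png')
})
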
pype/nukestudio/precomp_clip.py (new file) | 148
@@ -0,0 +1,148 @@
from hiero.core import *
from hiero.ui import *
import ft_utils
import re
import os


def create_nk_script_clips(script_lst, seq=None):
    '''
    nk_scripts is list of dictionaries like:
    [{
        'path': 'P:/Jakub_testy_pipeline/test_v01.nk',
        'name': 'test',
        'timeline_frame_in': 10,
        'handles': 10,
        'source_start': 0,
        'source_end': 54,
        'task': 'Comp-tracking',
        'work_dir': 'VFX_PR',
        'shot': '00010'
    }]
    '''
    env = ft_utils.Env()
    proj = projects()[-1]
    root = proj.clipsBin()

    if not seq:
        seq = Sequence('NewSequences')
        root.addItem(BinItem(seq))
    # todo: will need to define this better
    # track = seq[1]  # lazy example to get a destination track
    clips_lst = []
    for nk in script_lst:
        task_short = env.task_codes[nk['task']]
        script_file = task_short
        task_path = '/'.join([nk['work_dir'], nk['shot'], nk['task']])
        bin = create_bin_in_project(task_path, proj)
        task_path += script_file

        if nk['task'] not in seq.videoTracks():
            track = hiero.core.VideoTrack(nk['task'])
            seq.addTrack(track)
        else:
            track = seq.tracks(nk['task'])

        # create clip media
        print nk['path']
        media = MediaSource(nk['path'])
        print media
        source = Clip(media)
        print source
        name = os.path.basename(os.path.splitext(nk['path'])[0])
        split_name = split_by_client_version(name, env)[0] or name
        print split_name
        # print source
        # add to bin as clip item
        items_in_bin = [b.name() for b in bin.items()]
        if split_name not in items_in_bin:
            binItem = BinItem(source)
            bin.addItem(binItem)
        print bin.items()
        new_source = [
            item for item in bin.items() if split_name in item.name()
        ][0].items()[0].item()
        print new_source
        # add to track as clip item
        trackItem = TrackItem(split_name, TrackItem.kVideo)
        trackItem.setSource(new_source)
        trackItem.setSourceIn(nk['source_start'] + nk['handles'])
        trackItem.setSourceOut(nk['source_end'] - nk['handles'])
        trackItem.setTimelineIn(nk['source_start'] + nk['timeline_frame_in'])
        trackItem.setTimelineOut(
            (nk['source_end'] - (nk['handles'] * 2)) + nk['timeline_frame_in'])
        track.addTrackItem(trackItem)
        track.addTrackItem(trackItem)
        clips_lst.append(trackItem)

    return clips_lst


def create_bin_in_project(bin_name='', project=''):
    '''
    create bin in project and
    if the bin_name is "bin1/bin2/bin3" it will create the whole depth
    '''

    if not project:
        # get the first loaded project
        project = projects()[0]
    if not bin_name:
        return None
    if '/' in bin_name:
        bin_name = bin_name.split('/')
    else:
        bin_name = [bin_name]

    clipsBin = project.clipsBin()

    done_bin_lst = []
    for i, b in enumerate(bin_name):
        if i == 0 and len(bin_name) > 1:
            if b in [bin.name() for bin in clipsBin.bins()]:
                bin = [bin for bin in clipsBin.bins() if b in bin.name()][0]
                done_bin_lst.append(bin)
            else:
                create_bin = Bin(b)
                clipsBin.addItem(create_bin)
                done_bin_lst.append(create_bin)

        elif i >= 1 and i < len(bin_name) - 1:
            if b in [bin.name() for bin in done_bin_lst[i - 1].bins()]:
                bin = [
                    bin for bin in done_bin_lst[i - 1].bins()
                    if b in bin.name()
                ][0]
                done_bin_lst.append(bin)
            else:
                create_bin = Bin(b)
                done_bin_lst[i - 1].addItem(create_bin)
                done_bin_lst.append(create_bin)

        elif i == len(bin_name) - 1:
            if b in [bin.name() for bin in done_bin_lst[i - 1].bins()]:
                bin = [
                    bin for bin in done_bin_lst[i - 1].bins()
                    if b in bin.name()
                ][0]
                done_bin_lst.append(bin)
            else:
                create_bin = Bin(b)
                done_bin_lst[i - 1].addItem(create_bin)
                done_bin_lst.append(create_bin)
    # print [bin.name() for bin in clipsBin.bins()]
    return done_bin_lst[-1]
|
||||
|
||||
|
||||
def split_by_client_version(string, env=None):
|
||||
if not env:
|
||||
env = ft_utils.Env()
|
||||
|
||||
client_letter, client_digits = env.get_version_type('client')
|
||||
regex = "[/_.]" + client_letter + "\d+"
|
||||
try:
|
||||
matches = re.findall(regex, string, re.IGNORECASE)
|
||||
return string.split(matches[0])
|
||||
except Exception, e:
|
||||
print e
|
||||
return None
|
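`create_bin_in_project` walks a `/`-separated path and only creates the levels that do not exist yet. A usage sketch (project and path names are placeholders):

```python
# Illustrative call only -- the path and project are made up.
import hiero.core

project = hiero.core.projects()[-1]

# Creates the bins VFX_PR, VFX_PR/00010 and VFX_PR/00010/Comp-tracking
# as needed and returns the deepest one.
task_bin = create_bin_in_project('VFX_PR/00010/Comp-tracking', project)
print(task_bin.name())  # "Comp-tracking"
```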

pype/nukestudio/tags.py (new file, 131 lines)
@@ -0,0 +1,131 @@
import hiero
import re
from pypeapp import (
    config,
    Logger
)

log = Logger().get_logger(__name__, "nukestudio")

_hierarchy_orig = 'hierarchy_orig'


def create_tag(key, value):
    """
    Creating Tag object.

    Args:
        key (str): name of tag
        value (dict): parameters of tag

    Returns:
        object: Tag object
    """

    tag = hiero.core.Tag(str(key))

    return update_tag(tag, value)


def update_tag(tag, value):
    """
    Updating Tag object.

    Args:
        tag (obj): Tag object
        value (dict): parameters of tag
    """

    tag.setNote(value['note'])
    tag.setIcon(value['icon']['path'])
    mtd = tag.metadata()
    pres_mtd = value.get('metadata', None)
    if pres_mtd:
        for k, v in pres_mtd.items():
            mtd.setValue("tag.{}".format(k), v)

    return tag


def add_tags_from_presets():
    """
    Will create default tags from presets.
    """

    # get all presets
    presets = config.get_presets()

    # get nukestudio tag.json from presets
    nks_pres = presets['nukestudio']
    nks_pres_tags = nks_pres.get("tags", None)

    # get project and root bin object
    project = hiero.core.projects()[-1]
    root_bin = project.tagsBin()

    for _k, _val in nks_pres_tags.items():
        pattern = re.compile(r"\[(.*)\]")
        bin_find = pattern.findall(_k)
        if bin_find:
            # check what is in root bin
            bins = [b for b in root_bin.items()
                    if b.name() in str(bin_find[0])]

            if bins:
                bin = bins[0]
            else:
                # create Bin object
                bin = hiero.core.Bin(str(bin_find[0]))

            for k, v in _val.items():
                tags = [t for t in bin.items()
                        if str(k) in t.name()
                        if len(str(k)) == len(t.name())]
                if not tags:
                    # create Tag obj
                    tag = create_tag(k, v)

                    # adding Tag to Bin
                    bin.addItem(tag)
                else:
                    update_tag(tags[0], v)

            if not bins:
                # adding Bin to Root Bin
                root_bin.addItem(bin)

        else:
            tags = None
            tags = [t for t in root_bin.items()
                    if str(_k) in t.name()]

            if not tags:
                # create Tag
                tag = create_tag(_k, _val)

                # adding Tag to Root Bin
                root_bin.addItem(tag)
            else:
                # check if Hierarchy in name
                # update Tag if already exists
                tag_names = [tg.name().lower() for tg in tags]
                for _t in tags:
                    if 'hierarchy' not in _t.name().lower():
                        # update only non hierarchy tags
                        # because hierarchy could be edited
                        update_tag(_t, _val)
                    elif _hierarchy_orig in _t.name().lower():
                        # if hierarchy_orig already exists just
                        # sync with preset
                        update_tag(_t, _val)
                    else:
                        # if hierarchy tag already exists then create
                        # backup synchronisable original Tag
                        if _hierarchy_orig not in tag_names:
                            # create Tag obj
                            tag = create_tag(
                                _hierarchy_orig.capitalize(), _val
                            )

                            # adding Tag to Bin
                            root_bin.addItem(tag)
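The commit does not show the tags preset file itself; a plausible minimal shape, inferred from the lookups above (`note`, `icon['path']`, optional `metadata`, and `[...]`-wrapped keys that become bins), would be:

```python
# Hypothetical preset data -- the shape is inferred from the lookups in
# add_tags_from_presets(); the real preset may carry more fields and the
# icon paths here are made up.
nks_pres_tags = {
    "Hierarchy": {
        "note": "{folder}/{sequence}/{shot}",
        "icon": {"path": "icons:TagAlembic.png"},
        "metadata": {"folder": "FILM"},
    },
    "[Status]": {                 # "[...]" keys group tags into a Bin
        "Approved": {
            "note": "",
            "icon": {"path": "icons:status/TagFinal.png"},
        },
    },
}
```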
@@ -23,11 +23,19 @@ class CollectFtrackApi(pyblish.api.ContextPlugin):

        project = os.environ.get('AVALON_PROJECT', '')
        asset = os.environ.get('AVALON_ASSET', '')
        task = os.environ.get('AVALON_TASK', '')
        task = os.environ.get('AVALON_TASK', None)
        self.log.debug(task)

        result = session.query('Task where\
            project.full_name is "{0}" and\
            name is "{1}" and\
            parent.name is "{2}"'.format(project, task, asset)).one()
        if task:
            result = session.query('Task where\
                project.full_name is "{0}" and\
                name is "{1}" and\
                parent.name is "{2}"'.format(project, task, asset)).one()
            context.data["ftrackTask"] = result
        else:
            result = session.query('TypedContext where\
                project.full_name is "{0}" and\
                name is "{1}"'.format(project, asset)).one()
            context.data["ftrackEntity"] = result

        context.data["ftrackTask"] = result
        self.log.info(result)
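Condensed to its effect, the new collector queries a `Task` when `AVALON_TASK` is set and otherwise falls back to any `TypedContext` with the asset's name. A standalone sketch using the same `ftrack_api` calls (assumes a configured session and real entity names):

```python
# Sketch only; assumes ftrack_api is configured via FTRACK_* env vars
# and that the project/asset/task names actually exist on the server.
import os
import ftrack_api

session = ftrack_api.Session()
project = os.environ.get('AVALON_PROJECT', '')
asset = os.environ.get('AVALON_ASSET', '')
task = os.environ.get('AVALON_TASK', None)

if task:
    entity = session.query(
        'Task where project.full_name is "{0}" and name is "{1}" '
        'and parent.name is "{2}"'.format(project, task, asset)).one()
else:
    entity = session.query(
        'TypedContext where project.full_name is "{0}" '
        'and name is "{1}"'.format(project, asset)).one()
```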
@@ -29,7 +29,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        if sys.version_info[0] < 3:
            for key, value in data.iteritems():
                if not isinstance(value, (basestring, int)):
                    self.log.info(value)
                    self.log.info("value: {}".format(value))
                    if "id" in value.keys():
                        queries.append(
                            "{0}.id is \"{1}\"".format(key, value["id"])

@@ -39,7 +39,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        else:
            for key, value in data.items():
                if not isinstance(value, (str, int)):
                    self.log.info(value)
                    self.log.info("value: {}".format(value))
                    if "id" in value.keys():
                        queries.append(
                            "{0}.id is \"{1}\"".format(key, value["id"])

@@ -56,7 +56,14 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
    def process(self, instance):

        session = instance.context.data["ftrackSession"]
        task = instance.context.data["ftrackTask"]
        if instance.context.data.get("ftrackTask"):
            task = instance.context.data["ftrackTask"]
            name = task
            parent = task["parent"]
        elif instance.context.data.get("ftrackEntity"):
            task = None
            name = instance.context.data.get("ftrackEntity")['name']
            parent = instance.context.data.get("ftrackEntity")

        info_msg = "Created new {entity_type} with data: {data}"
        info_msg += ", metadata: {metadata}."

@@ -68,6 +75,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        # Get existing entity.
        assettype_data = {"short": "upload"}
        assettype_data.update(data.get("assettype_data", {}))
        self.log.debug("data: {}".format(data))

        assettype_entity = session.query(
            self.query("AssetType", assettype_data)

@@ -83,9 +91,9 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        # Asset
        # Get existing entity.
        asset_data = {
            "name": task["name"],
            "name": name,
            "type": assettype_entity,
            "parent": task["parent"],
            "parent": parent,
        }
        asset_data.update(data.get("asset_data", {}))

@@ -93,7 +101,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
            self.query("Asset", asset_data)
        ).first()

        self.log.info(asset_entity)
        self.log.info("asset entity: {}".format(asset_entity))

        # Extracting metadata, and adding after entity creation. This is
        # due to a ftrack_api bug where you can't add metadata on creation.

@@ -120,8 +128,10 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        assetversion_data = {
            "version": 0,
            "asset": asset_entity,
            "task": task
        }
        if task:
            assetversion_data['task'] = task

        assetversion_data.update(data.get("assetversion_data", {}))

        assetversion_entity = session.query(
@@ -29,60 +29,50 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):

    def process(self, instance):
        self.log.debug('instance {}'.format(instance))
        assumed_data = instance.data["assumedTemplateData"]
        assumed_version = assumed_data["version"]
        version_number = int(assumed_version)

        if instance.data.get('version'):
            version_number = int(instance.data.get('version'))

        family = instance.data['family'].lower()
        asset_type = ''

        asset_type = ''
        asset_type = self.family_mapping[family]

        componentList = []

        dst_list = instance.data['destination_list']

        ft_session = instance.context.data["ftrackSession"]

        for file in instance.data['destination_list']:
            self.log.debug('file {}'.format(file))
        for comp in instance.data['representations']:
            self.log.debug('component {}'.format(comp))

        for file in dst_list:
            filename, ext = os.path.splitext(file)
            self.log.debug('dest ext: ' + ext)
            thumbnail = False

            if ext in ['.mov']:
                if not instance.data.get('startFrameReview'):
                    instance.data['startFrameReview'] = instance.data['startFrame']
                if not instance.data.get('endFrameReview'):
                    instance.data['endFrameReview'] = instance.data['endFrame']
            if comp.get('thumbnail'):
                location = ft_session.query(
                    'Location where name is "ftrack.server"').one()
                component_data = {
                    "name": "thumbnail"  # Default component name is "main".
                }
            elif comp.get('preview'):
                if not comp.get('startFrameReview'):
                    comp['startFrameReview'] = comp['startFrame']
                if not comp.get('endFrameReview'):
                    comp['endFrameReview'] = comp['endFrame']
                location = ft_session.query(
                    'Location where name is "ftrack.server"').one()
                component_data = {
                    # Default component name is "main".
                    "name": "ftrackreview-mp4",
                    "metadata": {'ftr_meta': json.dumps({
                        'frameIn': int(instance.data['startFrameReview']),
                        'frameOut': int(instance.data['startFrameReview']),
                        'frameRate': 25})}
                        'frameIn': int(comp['startFrameReview']),
                        'frameOut': int(comp['endFrameReview']),
                        'frameRate': comp['frameRate']})}
                }
            elif ext in [".jpg", ".jpeg"]:
                component_data = {
                    "name": "thumbnail"  # Default component name is "main".
                }
                thumbnail = True
                location = ft_session.query(
                    'Location where name is "ftrack.server"').one()
                comp['thumbnail'] = False
            else:
                component_data = {
                    "name": ext[1:]  # Default component name is "main".
                    "name": comp['name']
                }

                location = ft_session.query(
                    'Location where name is "ftrack.unmanaged"').one()
                comp['thumbnail'] = False

            self.log.debug('location {}'.format(location))

@@ -96,10 +86,10 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
                    "version": version_number,
                },
                "component_data": component_data,
                "component_path": file,
                "component_path": comp['published_path'],
                'component_location': location,
                "component_overwrite": False,
                "thumbnail": thumbnail
                "thumbnail": comp['thumbnail']
                }
            )
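The `ftr_meta` metadata above is what tells ftrack's web player how to treat the uploaded component; a minimal reviewable payload looks roughly like this (frame values are placeholders):

```python
import json

# Placeholder frame range; in the plugin these come from the
# representation dict (comp['startFrameReview'], etc.).
component_data = {
    "name": "ftrackreview-mp4",   # reviewable component, not "main"
    "metadata": {"ftr_meta": json.dumps({
        "frameIn": 1001,
        "frameOut": 1100,
        "frameRate": 25,
    })},
}
```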
@@ -1,85 +0,0 @@
import pyblish.api
import os
import clique
import json


class IntegrateFtrackReview(pyblish.api.InstancePlugin):
    """Collect ftrack component data

    Add ftrack component list to instance.

    """

    order = pyblish.api.IntegratorOrder + 0.48
    label = 'Integrate Ftrack Review'
    families = ['review', 'ftrack']

    family_mapping = {'review': 'mov'}

    def process(self, instance):

        self.log.debug('instance {}'.format(instance))
        #
        # assumed_data = instance.data["assumedTemplateData"]
        # assumed_version = assumed_data["version"]
        # version_number = int(assumed_version)
        # family = instance.data['family'].lower()
        # asset_type = ''
        #
        # asset_type = self.family_mapping[family]
        #
        # componentList = []
        #
        # dst_list = instance.data['destination_list']
        #
        # ft_session = instance.context.data["ftrackSession"]
        #
        #
        # for file in instance.data['destination_list']:
        #     self.log.debug('file {}'.format(file))
        #
        # for file in dst_list:
        #     filename, ext = os.path.splitext(file)
        #     self.log.debug('dest ext: ' + ext)
        #
        #     if ext == '.mov':
        #         component_name = "ftrackreview-mp4"
        #         metadata = {'ftr_meta': json.dumps({
        #             'frameIn': int(instance.data["startFrame"]),
        #             'frameOut': int(instance.data["startFrame"]),
        #             'frameRate': 25})}
        #         thumbnail = False
        #
        #     else:
        #         component_name = "thumbnail"
        #         thumbnail = True
        #
        #     location = ft_session.query(
        #         'Location where name is "ftrack.server"').one()
        #
        #     componentList.append({
        #         "assettype_data": {
        #             "short": asset_type,
        #         },
        #         "asset_data": {
        #             "name": instance.data["subset"],
        #         },
        #         "assetversion_data": {
        #             "version": version_number,
        #         },
        #         "component_data": {
        #             "name": component_name,  # Default component name is "main".
        #             "metadata": metadata
        #         },
        #         "component_path": file,
        #         'component_location': location,
        #         "component_overwrite": False,
        #         "thumbnail": thumbnail
        #     })
        #
        #
        # self.log.debug('componentsList: {}'.format(str(componentList)))
        # instance.data["ftrackComponentsList"] = componentList
@@ -1,14 +1,10 @@
import os
import pyblish.api
from avalon import (
    io,
    api as avalon
)
from avalon import io
import json
import logging
import clique


log = logging.getLogger("collector")


@@ -16,16 +12,11 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
    """
    Collecting temp json data sent from a host context
    and path for returning json data back to the host itself.

    Setting avalon session into correct context

    Args:
        context (obj): pyblish context session

    """

    label = "Collect Context - SA Publish"
    order = pyblish.api.CollectorOrder - 0.49
    hosts = ["shell"]

    def process(self, context):
        # get json paths from os and load them

@@ -33,13 +24,12 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
        input_json_path = os.environ.get("SAPUBLISH_INPATH")
        output_json_path = os.environ.get("SAPUBLISH_OUTPATH")

        context.data["stagingDir"] = os.path.dirname(input_json_path)
        # context.data["stagingDir"] = os.path.dirname(input_json_path)
        context.data["returnJsonPath"] = output_json_path

        with open(input_json_path, "r") as f:
            in_data = json.load(f)

        project_name = in_data['project']
        asset_name = in_data['asset']
        family = in_data['family']
        subset = in_data['subset']

@@ -63,25 +53,24 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
            "families": [family, 'ftrack'],
        })
        self.log.info("collected instance: {}".format(instance.data))
        self.log.info("parsing data: {}".format(in_data))

        instance.data["files"] = list()
        instance.data['destination_list'] = list()
        instance.data['representations'] = list()
        instance.data['source'] = 'standalone publisher'

        for component in in_data['representations']:
            # instance.add(node)
            component['destination'] = component['files']
            collections, remainder = clique.assemble(component['files'])
            if collections:
                self.log.debug(collections)
                instance.data['startFrame'] = component['startFrame']
                instance.data['endFrame'] = component['endFrame']
                instance.data['frameRate'] = component['frameRate']

            instance.data["files"].append(component)
            component['destination'] = component['files']
            component['stagingDir'] = component['stagingDir']
            component['anatomy_template'] = 'render'
            if isinstance(component['files'], list):
                collections, remainder = clique.assemble(component['files'])
                self.log.debug("collecting sequence: {}".format(collections))
                instance.data['startFrame'] = int(component['startFrame'])
                instance.data['endFrame'] = int(component['endFrame'])
                instance.data['frameRate'] = int(component['frameRate'])

            instance.data["representations"].append(component)

            # "is_thumbnail": component['thumbnail'],
            # "is_preview": component['preview']

        self.log.info(in_data)
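A plausible example of the temp JSON the standalone publisher writes, inferred from the keys this collector reads; the real payload may carry more fields:

```python
# Hypothetical contents of the SAPUBLISH_INPATH json -- field names are
# taken from the reads above, the values are made up.
in_data = {
    "project": "projectx",
    "asset": "sh010",
    "family": "render",
    "subset": "renderMain",
    "representations": [{
        "name": "exr",
        "ext": "exr",
        "files": ["sh010.%04d.exr" % f for f in range(1001, 1011)],
        "stagingDir": "C:/temp/publish",
        "startFrame": 1001,
        "endFrame": 1010,
        "frameRate": 25,
        "thumbnail": False,
        "preview": True,
    }],
}
```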
@@ -178,26 +178,28 @@ class CollectFileSequences(pyblish.api.ContextPlugin):
                # root = os.path.normpath(root)
                # self.log.info("Source: {}".format(data.get("source", "")))

                try:
                    instance.data.update({
                        "name": str(collection),
                        "family": families[0],
                        "families": list(families),
                        "subset": subset,
                        "asset": data.get(
                            "asset", api.Session.get("AVALON_ASSET")),
                        "stagingDir": root,
                        "files": [list(collection)],
                        "startFrame": start,
                        "endFrame": end,
                        "fps": fps,
                        "source": data.get('source', '')
                    })
                except Exception as e:
                    print(e)
                    raise
                instance.data.update({
                    "name": str(collection),
                    "family": families[0],  # backwards compatibility / pyblish
                    "families": list(families),
                    "subset": subset,
                    "asset": data.get("asset", api.Session["AVALON_ASSET"]),
                    "stagingDir": root,
                    "startFrame": start,
                    "endFrame": end,
                    "fps": fps,
                    "source": data.get('source', '')
                })
                instance.append(collection)

                representation = {
                    'name': 'jpg',
                    'ext': '.jpg',
                    'files': [list(collection)],
                    "stagingDir": root,
                }
                instance.data["representations"] = [representation]

                if data.get('user'):
                    context.data["user"] = data['user']
@@ -62,6 +62,13 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
        sub_proc = subprocess.Popen(subprocess_jpeg)
        sub_proc.wait()

        if "files" not in instance.data:
            instance.data["files"] = list()
        instance.data["files"].append(jpegFile)
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'jpg',
            'ext': '.jpg',
            'files': jpegFile,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)

@@ -70,6 +70,13 @@ class ExtractQuicktimeEXR(pyblish.api.InstancePlugin):
        sub_proc = subprocess.Popen(subprocess_mov)
        sub_proc.wait()

        if "files" not in instance.data:
            instance.data["files"] = list()
        instance.data["files"].append(movFile)
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'mov',
            'ext': '.mov',
            'files': movFile,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)
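Nearly every extractor in this commit converges on the same pattern: ensure `instance.data['representations']` exists, then append a dict with `name`, `ext`, `files` and `stagingDir`. A condensed equivalent of that pattern (a hypothetical helper, not part of the codebase, which inlines it plugin by plugin):

```python
# Hypothetical helper -- the plugins in this commit inline this logic.
def add_representation(instance, name, ext, files, staging_dir, **extra):
    """Append a representation dict in the shape the new integrator expects."""
    if "representations" not in instance.data:
        instance.data["representations"] = []
    repre = {
        "name": name,        # usually the extension without the dot
        "ext": ext,          # e.g. '.jpg', '.mov'
        "files": files,      # str for a single file, list for a sequence
        "stagingDir": staging_dir,
    }
    repre.update(extra)      # e.g. anatomy_template, thumbnail, preview
    instance.data["representations"].append(repre)
```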
@@ -1,6 +1,7 @@
import os
import logging
import shutil
import clique

import errno
import pyblish.api

@@ -24,25 +25,10 @@ class IntegrateAsset(pyblish.api.InstancePlugin):

    label = "Integrate Asset"
    order = pyblish.api.IntegratorOrder
    families = ["animation",
                "camera",
                "look",
                "mayaAscii",
                "model",
                "pointcache",
                "vdbcache",
                "setdress",
    families = ["look",
                "assembly",
                "layout",
                "rig",
                "vrayproxy",
                "yetiRig",
                "yeticache",
                "nukescript",
                "review",
                "workfile",
                "scene",
                "ass"]
                "yeticache"]
    exclude_families = ["clip"]

    def process(self, instance):

@@ -53,7 +39,9 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        self.register(instance)

        self.log.info("Integrating Asset in to the database ...")
        self.integrate(instance)
        if instance.data.get('transfer', True):
            self.integrate(instance)


    def register(self, instance):
        # Required environment variables
@@ -1,19 +1,18 @@
import os
import logging
import shutil
import clique

import errno
import pyblish.api
from avalon import api, io
from avalon.vendor import filelink
import clique


log = logging.getLogger(__name__)


class IntegrateAsset(pyblish.api.InstancePlugin):
    """Resolve any dependency issues
class IntegrateAssetNew(pyblish.api.InstancePlugin):
    """Resolve any dependency issues

    This plug-in resolves any paths which, if not updated might break
    the published file.

@@ -21,29 +20,48 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
    The order of families is important, when working with lookdev you want to
    first publish the texture, update the texture paths in the nodes and then
    publish the shading network. Same goes for file dependent assets.

    Requirements for instance to be correctly integrated

    instance.data['representations'] - must be a list and each member
    must be a dictionary with following data:
        'files': list of filenames for sequence, string for single file.
            Only the filename is allowed, without the folder path.
        'stagingDir': "path/to/folder/with/files"
        'name': representation name (usually the same as extension)
        'ext': file extension
    optional data
        'anatomy_template': 'publish' or 'render', etc.
            template from anatomy that should be used for
            integrating this file. Only the first level can
            be specified right now.
        'startFrame'
        'endFrame'
        'frameRate'
    """

    label = "Integrate Asset"
    label = "Integrate Asset New"
    order = pyblish.api.IntegratorOrder
    families = ["animation",
                "camera",
                "look",
                "mayaAscii",
                "model",
    families = ["workfile",
                "pointcache",
                "vdbcache",
                "camera",
                "animation",
                "model",
                "mayaAscii",
                "setdress",
                "assembly",
                "layout",
                "rig",
                "vrayproxy",
                "yetiRig",
                "yeticache",
                "nukescript",
                # "review",
                "workfile",
                "ass",
                "vdbcache",
                "scene",
                "ass"]
                "vrayproxy",
                "render",
                "imagesequence",
                "review",
                "nukescript",
                "render",
                "write",
                "plates"
                ]
    exclude_families = ["clip"]

    def process(self, instance):
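Put concretely, an instance satisfying the documented contract above might carry (values are illustrative):

```python
# Illustrative instance data satisfying the docstring contract above.
instance.data["representations"] = [
    {   # a rendered sequence -- 'files' is a list of bare filenames
        "name": "exr",
        "ext": "exr",
        "files": ["shot010.%04d.exr" % f for f in range(1001, 1006)],
        "stagingDir": "/tmp/render",
        "anatomy_template": "render",   # optional
        "startFrame": 1001,             # optional
        "endFrame": 1005,               # optional
    },
    {   # a single-file representation -- 'files' is a plain string
        "name": "mov",
        "ext": "mov",
        "files": "shot010_review.mov",
        "stagingDir": "/tmp/render",
    },
]
```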
@@ -55,12 +73,15 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        self.register(instance)

        self.log.info("Integrating Asset in to the database ...")
        self.integrate(instance)
        self.log.info("instance.data: {}".format(instance.data))
        if instance.data.get('transfer', True):
            self.integrate(instance)

    def register(self, instance):
        # Required environment variables
        PROJECT = api.Session["AVALON_PROJECT"]
        ASSET = instance.data.get("asset") or api.Session["AVALON_ASSET"]
        TASK = instance.data.get("task") or api.Session["AVALON_TASK"]
        LOCATION = api.Session["AVALON_LOCATION"]

        context = instance.context

@@ -86,19 +107,21 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        #  ^
        #  |
        #
        # stagingdir = instance.data.get("stagingDir")
        # assert stagingdir, ("Incomplete instance \"%s\": "
        #                     "Missing reference to staging area." % instance)
        stagingdir = instance.data.get("stagingDir")
        if not stagingdir:
            self.log.info('''{} is missing reference to staging
                          directory. Will try to get it from
                          representation.'''.format(instance))

        # extra check if stagingDir actually exists and is available

        # self.log.debug("Establishing staging directory @ %s" % stagingdir)
        self.log.debug("Establishing staging directory @ %s" % stagingdir)

        # Ensure at least one file is set up for transfer in staging dir.
        files = instance.data.get("files", [])
        assert files, "Instance has no files to transfer"
        assert isinstance(files, (list, tuple)), (
            "Instance 'files' must be a list, got: {0}".format(files)
        repres = instance.data.get("representations", None)
        assert repres, "Instance has no files to transfer"
        assert isinstance(repres, (list, tuple)), (
            "Instance 'representations' must be a list, got: {0}".format(repres)
        )

        project = io.find_one({"type": "project"})

@@ -122,7 +145,10 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        if latest_version is not None:
            next_version += latest_version["name"]

        self.log.info("Verifying version from assumed destination")
        if instance.data.get('version'):
            next_version = int(instance.data.get('version'))

        # self.log.info("Verifying version from assumed destination")

        # assumed_data = instance.data["assumedTemplateData"]
        # assumed_version = assumed_data["version"]

@@ -143,6 +169,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        self.log.debug("Creating version ...")
        version_id = io.insert_one(version).inserted_id
        instance.data['version'] = version['name']

        # Write to disk
        #  _
        # | |

@@ -167,19 +194,20 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                         "project": {"name": PROJECT,
                                     "code": project['data']['code']},
                         "silo": asset['silo'],
                         "task": TASK,
                         "asset": ASSET,
                         "family": instance.data['family'],
                         "subset": subset["name"],
                         "version": int(version["name"]),
                         "hierarchy": hierarchy}

        template_publish = project["config"]["template"]["publish"]
        anatomy = instance.context.data['anatomy']

        # Find the representations to transfer amongst the files
        # Each should be a single representation (as such, a single extension)
        representations = []
        destination_list = []
        template_name = 'publish'
        if 'transfers' not in instance.data:
            instance.data['transfers'] = []
@@ -196,10 +224,17 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
            #

            files = repre['files']
            if repre.get('stagingDir'):
                stagingdir = repre['stagingDir']
            if repre.get('anatomy_template'):
                template_name = repre['anatomy_template']
            template = os.path.normpath(
                anatomy.templates[template_name]["path"])

            if len(files) > 1:
            if isinstance(files, list):
                src_collections, remainder = clique.assemble(files)
                self.log.debug("dst_collections: {}".format(str(src_collections)))
                self.log.debug(
                    "src_tail_collections: {}".format(str(src_collections)))
                src_collection = src_collections[0]
                # Assert that each member has identical suffix
                src_head = src_collection.format("{head}")

@@ -207,11 +242,15 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):

                test_dest_files = list()
                for i in [1, 2]:
                    template_data["representation"] = src_tail[1:]
                    template_data["representation"] = repre['ext']
                    template_data["frame"] = src_collection.format(
                        "{padding}") % i
                    anatomy_filled = anatomy.format(template_data)
                    test_dest_files.append(anatomy_filled["publish"]["path"])
                    test_dest_files.append(
                        os.path.normpath(
                            anatomy_filled[template_name]["path"])
                    )

                dst_collections, remainder = clique.assemble(test_dest_files)
                dst_collection = dst_collections[0]

@@ -224,13 +263,12 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                    src_padding = src_collection.format("{padding}") % i
                    src_file_name = "{0}{1}{2}".format(
                        src_head, src_padding, src_tail)

                    dst_padding = dst_collection.format("{padding}") % i
                    dst = "{0}{1}{2}".format(dst_head, dst_padding, dst_tail)

                    # src = os.path.join(stagingdir, src_file_name)
                    src = src_file_name
                    self.log.debug("source: {}".format(src))

                    self.log.debug("destination: `{}`".format(dst))
                    src = os.path.join(stagingdir, src_file_name)
                    self.log.debug("source: `{}`".format(src))
                    instance.data["transfers"].append([src, dst])

            else:

@@ -242,28 +280,27 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                #  | |
                #  |_______|
                #
                fname = files[0]
                # assert not os.path.isabs(fname), (
                #     "Given file name is a full path"
                # )
                # _, ext = os.path.splitext(fname)
                template_data.pop("frame", None)
                fname = files
                assert not os.path.isabs(fname), (
                    "Given file name is a full path"
                )
                _, ext = os.path.splitext(fname)

                template_data["representation"] = repre['representation']
                template_data["representation"] = repre['ext']

                # src = os.path.join(stagingdir, fname)
                src = fname
                src = os.path.join(stagingdir, fname)
                anatomy_filled = anatomy.format(template_data)
                dst = anatomy_filled["publish"]["path"]
                dst = anatomy_filled[template_name]["path"]

                instance.data["transfers"].append([src, dst])
            template = anatomy.templates["publish"]["path"]
            instance.data["representations"][idx]['published_path'] = dst

            representation = {
                "schema": "pype:representation-2.0",
                "type": "representation",
                "parent": version_id,
                "name": repre['representation'],
                "name": repre['name'],
                "data": {'path': dst, 'template': template},
                "dependencies": instance.data.get("dependencies", "").split(),
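The two-sample destination trick above (filling the template for frames 1 and 2, then assembling the results) recovers the destination's head, padding and tail without parsing the template itself. A runnable sketch, with a plain format string standing in for the anatomy template:

```python
import clique

# A plain format string stands in for the anatomy template here.
template = "/proj/publish/sh010/render/v003/sh010_v003.{frame}.exr"
test_dest_files = [template.format(frame="%04d" % i) for i in [1, 2]]

dst_collections, _ = clique.assemble(test_dest_files)
dst_collection = dst_collections[0]
print(dst_collection.format("{head}"))     # /proj/.../sh010_v003.
print(dst_collection.format("{padding}"))  # %04d
print(dst_collection.format("{tail}"))     # .exr
```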
@@ -273,14 +310,14 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                    "root": root,
                    "project": {"name": PROJECT,
                                "code": project['data']['code']},
                    # 'task': api.Session["AVALON_TASK"],
                    'task': TASK,
                    "silo": asset['silo'],
                    "asset": ASSET,
                    "family": instance.data['family'],
                    "subset": subset["name"],
                    "version": version["name"],
                    "hierarchy": hierarchy,
                    # "representation": repre['representation']
                    "representation": repre['ext']
                }
            }

@@ -417,18 +454,15 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        families += current_families

        self.log.debug("Registered root: {}".format(api.registered_root()))
        # # create relative source path for DB
        # try:
        #     source = instance.data['source']
        # except KeyError:
        #     source = context.data["currentFile"]
        #
        #     relative_path = os.path.relpath(source, api.registered_root())
        #     source = os.path.join("{root}", relative_path).replace("\\", "/")
        # create relative source path for DB
        try:
            source = instance.data['source']
        except KeyError:
            source = context.data["currentFile"]
            relative_path = os.path.relpath(source, api.registered_root())
            source = os.path.join("{root}", relative_path).replace("\\", "/")

        source = "standalone"

        # self.log.debug("Source: {}".format(source))
        self.log.debug("Source: {}".format(source))
        version_data = {"families": families,
                        "time": context.data["time"],
                        "author": context.data["user"],
@@ -24,7 +24,7 @@ class IntegrateFrames(pyblish.api.InstancePlugin):

    label = "Integrate Frames"
    order = pyblish.api.IntegratorOrder
    families = ["imagesequence", "render", "write", "source"]
    families = ["imagesequence", "source"]

    family_targets = [".frames", ".local", ".review", "imagesequence", "render", "source"]
    exclude_families = ["clip"]
@@ -20,7 +20,7 @@ class ExtractAlembic(pype.api.Extractor):
        # Get the filename from the filename parameter
        output = ropnode.evalParm("filename")
        staging_dir = os.path.dirname(output)
        instance.data["stagingDir"] = staging_dir
        # instance.data["stagingDir"] = staging_dir

        file_name = os.path.basename(output)

@@ -37,7 +37,13 @@ class ExtractAlembic(pype.api.Extractor):
            traceback.print_exc()
            raise RuntimeError("Render failed: {0}".format(exc))

        if "files" not in instance.data:
            instance.data["files"] = []
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(file_name)
        representation = {
            'name': 'abc',
            'ext': '.abc',
            'files': file_name,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)
@@ -35,9 +35,15 @@ class ExtractVDBCache(pype.api.Extractor):
            traceback.print_exc()
            raise RuntimeError("Render failed: {0}".format(exc))

        if "files" not in instance.data:
            instance.data["files"] = []

        output = instance.data["frames"]

        instance.data["files"].append(output)
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'vdb',
            'ext': '.vdb',
            'files': output,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)
@@ -37,16 +37,19 @@ class CollectMayaScene(pyblish.api.ContextPlugin):
            "label": subset,
            "publish": False,
            "family": 'workfile',
            "representation": "ma",
            "setMembers": [current_file],
            "stagingDir": folder
            "setMembers": [current_file]
        })

        data['files'] = [file]
        data['representations'] = [{
            'name': 'ma',
            'ext': '.ma',
            'files': file,
            "stagingDir": folder,
        }]

        instance.data.update(data)

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('Scene path: {}'.format(current_file))
        self.log.info('stagin Dir: {}'.format(folder))
        self.log.info('subset: {}'.format(filename))
        self.log.info('staging Dir: {}'.format(folder))
        self.log.info('subset: {}'.format(subset))
@@ -77,9 +77,15 @@ class ExtractAnimation(pype.api.Extractor):
                          endFrame=end,
                          **options)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'abc',
            'ext': '.abc',
            'files': filename,
            "stagingDir": dirname,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted {} to {}".format(instance, dirname))
@@ -21,8 +21,8 @@ class ExtractAssStandin(pype.api.Extractor):
    def process(self, instance):

        staging_dir = self.staging_dir(instance)
        file_name = "{}.ass".format(instance.name)
        file_path = os.path.join(staging_dir, file_name)
        filename = "{}.ass".format(instance.name)
        file_path = os.path.join(staging_dir, filename)

        # Write out .ass file
        self.log.info("Writing: '%s'" % file_path)

@@ -37,11 +37,16 @@ class ExtractAssStandin(pype.api.Extractor):
                          boundingBox=True
                          )

        if "representations" not in instance.data:
            instance.data["representations"] = []

        if "files" not in instance.data:
            instance.data["files"] = list()

        instance.data["files"].append(file_name)
        representation = {
            'name': 'ass',
            'ext': '.ass',
            'files': filename,
            "stagingDir": staging_dir
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s"
                      % (instance.name, staging_dir))
@@ -33,7 +33,6 @@ class ExtractAssProxy(pype.api.Extractor):
        else:
            yield

        # Define extract output file path
        stagingdir = self.staging_dir(instance)
        filename = "{0}.ma".format(instance.name)

@@ -64,10 +63,15 @@ class ExtractAssProxy(pype.api.Extractor):
                       expressions=False,
                       constructionHistory=False)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        if "files" not in instance.data:
            instance.data["files"] = list()

        instance.data["files"].append(filename)
        representation = {
            'name': 'ma',
            'ext': '.ma',
            'files': filename,
            "stagingDir": stagingdir
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
@@ -70,10 +70,16 @@ class ExtractCameraAlembic(pype.api.Extractor):
        with avalon.maya.suspended_refresh():
            cmds.AbcExport(j=job_str, verbose=False)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'abc',
            'ext': '.abc',
            'files': filename,
            "stagingDir": dir_path,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '{0}' to: {1}".format(
            instance.name, path))
@@ -168,10 +168,16 @@ class ExtractCameraMayaAscii(pype.api.Extractor):

        massage_ma_file(path)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'ma',
            'ext': '.ma',
            'files': filename,
            "stagingDir": dir_path,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '{0}' to: {1}".format(
            instance.name, path))
@@ -146,9 +146,9 @@ class ExtractFBX(pype.api.Extractor):
        cmds.loadPlugin("fbxmaya", quiet=True)

        # Define output path
        directory = self.staging_dir(instance)
        stagingDir = self.staging_dir(instance)
        filename = "{0}.fbx".format(instance.name)
        path = os.path.join(directory, filename)
        path = os.path.join(stagingDir, filename)

        # The export requires forward slashes because we need
        # to format it into a string in a mel expression

@@ -208,9 +208,16 @@ class ExtractFBX(pype.api.Extractor):
        cmds.select(members, r=1, noExpand=True)
        mel.eval('FBXExport -f "{}" -s'.format(path))

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'fbx',
            'ext': '.fbx',
            'files': filename,
            "stagingDir": stagingDir,
        }
        instance.data["representations"].append(representation)

        instance.data["files"].append(filename)

        self.log.info("Extract FBX successful to: {0}".format(path))
@@ -51,9 +51,15 @@ class ExtractMayaAsciiRaw(pype.api.Extractor):
                           constraints=True,
                           expressions=True)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'ma',
            'ext': '.ma',
            'files': filename,
            "stagingDir": dir_path
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
@@ -69,9 +69,15 @@ class ExtractModel(pype.api.Extractor):

        # Store reference for integration

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'ma',
            'ext': '.ma',
            'files': filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
@@ -79,9 +79,15 @@ class ExtractAlembic(pype.api.Extractor):
                       endFrame=end,
                       **options)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(filename)
        representation = {
            'name': 'abc',
            'ext': '.abc',
            'files': filename,
            "stagingDir": dirname
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted {} to {}".format(instance, dirname))
@@ -123,9 +123,21 @@ class ExtractQuicktime(pype.api.Extractor):
            self.log.error(ffmpeg_error)
            raise RuntimeError(ffmpeg_error)

        if "files" not in instance.data:
            instance.data["files"] = list()
        instance.data["files"].append(movieFile)
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'mov',
            'ext': '.mov',
            'files': movieFile,
            "stagingDir": stagingdir,
            'startFrame': start,
            'endFrame': end,
            'frameRate': fps,
            'preview': True
        }
        instance.data["representations"].append(representation)


@contextlib.contextmanager
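The `preview` flag set here is what routes this mov into the review branch of `IntegrateFtrackInstance` earlier in this commit. A downstream consumer could pick out the reviewable representation like so (hypothetical helper, not part of the codebase):

```python
# Hypothetical consumer of the 'preview' flag set by the extractor above.
def get_preview(instance):
    for repre in instance.data.get("representations", []):
        if repre.get("preview"):
            return repre
    return None
```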
@@ -34,9 +34,16 @@ class ExtractRig(pype.api.Extractor):
                           expressions=True,
                           constructionHistory=True)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'ma',
            'ext': '.ma',
            'files': filename,
            "stagingDir": dir_path
        }
        instance.data["representations"].append(representation)

        instance.data["files"].append(filename)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
@@ -110,9 +110,9 @@ class ExtractThumbnail(pype.api.Extractor):
            "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
        }

        stagingdir = self.staging_dir(instance)
        stagingDir = self.staging_dir(instance)
        filename = "{0}".format(instance.name)
        path = os.path.join(stagingdir, filename)
        path = os.path.join(stagingDir, filename)

        self.log.info("Outputting images to %s" % path)

@@ -131,51 +131,19 @@ class ExtractThumbnail(pype.api.Extractor):
        _, thumbnail = os.path.split(playblast)

        self.log.info("file list {}".format(thumbnail))
        # self.log.info("Calculating HUD data overlay")

        # stagingdir = "C:/Users/milan.kolar/AppData/Local/Temp/pyblish_tmp_ucsymm"
        # collected_frames = os.listdir(stagingdir)
        # collections, remainder = clique.assemble(collected_frames)
        # input_path = os.path.join(stagingdir, collections[0].format('{head}{padding}{tail}'))
        # self.log.info("input {}".format(input_path))
        if "representations" not in instance.data:
            instance.data["representations"] = []

        # movieFile = filename + ".mov"
        # full_movie_path = os.path.join(stagingdir, movieFile)
        # self.log.info("output {}".format(full_movie_path))
        # fls = [os.path.join(stagingdir, filename).replace("\\", "/") for f in os.listdir(dir_path) if f.endswith(preset['compression'])]
        # self.log.info("file list {}".format(fls[0]))
        representation = {
            'name': 'thumbnail',
            'ext': '.jpg',
            'files': thumbnail,
            "stagingDir": stagingDir,
            "thumbnail": True
        }
        instance.data["representations"].append(representation)

        # out, err = (
        #     ffmpeg
        #     .input(input_path, framerate=25)
        #     .output(full_movie_path)
        #     .run(overwrite_output=True)
        # )

        if "files" not in instance.data:
            instance.data["files"] = list()
        instance.data["files"].append(thumbnail)

        # ftrackStrings = fStrings.annotationData()
        # nData = ftrackStrings.niceData
        # nData['version'] = instance.context.data('version')
        # fFrame = int(pm.playbackOptions(q=True, minTime=True))
        # eFrame = int(pm.playbackOptions(q=True, maxTime=True))
        # nData['frame'] = [(str("{0:05d}".format(f))) for f in range(fFrame, eFrame + 1)]
        # soundOfst = int(float(nData['oFStart'])) - int(float(nData['handle'])) - fFrame
        # soundFile = mu.giveMePublishedAudio()
        # self.log.info("SOUND offset %s" % str(soundOfst))
        # self.log.info("SOUND source video to %s" % str(soundFile))
        # ann = dHUD.draftAnnotate()
        # if soundFile:
        #     ann.addAnotation(seqFls=fls, outputMoviePth=movieFullPth, annotateDataArr=nData, soundFile=soundFile, soundOffset=soundOfst)
        # else:
        #     ann.addAnotation(seqFls=fls, outputMoviePth=movieFullPth, annotateDataArr=nData)

        # for f in fls:
        #     os.remove(f)

        # playblast = (ann.expPth).replace("\\", "/")


@contextlib.contextmanager
@@ -54,10 +54,16 @@ class ExtractVRayProxy(pype.api.Extractor):
                             ignoreHiddenObjects=True,
                             createProxyNode=False)

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"].append(file_name)
        representation = {
            'name': 'vrmesh',
            'ext': '.vrmesh',
            'files': file_name,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s"
                      % (instance.name, staging_dir))
@@ -136,7 +136,7 @@ class LoadSequence(api.Loader):
                data_imprint.update({k: context["version"]['name']})
            else:
                data_imprint.update({k: context["version"]['data'][k]})

        data_imprint.update({"objectName": read_name})

        r["tile_color"].setValue(int("0x4ecd25ff", 16))
@@ -76,7 +76,16 @@ class CollectNukeReads(pyblish.api.ContextPlugin):

            self.log.debug("collected_frames: {}".format(label))

            instance.data["files"] = [source_files]
            if "representations" not in instance.data:
                instance.data["representations"] = []

            representation = {
                'name': ext,
                'ext': "." + ext,
                'files': source_files,
                "stagingDir": source_dir,
            }
            instance.data["representations"].append(representation)

            transfer = False
            if "publish" in node.knobs():
@@ -63,16 +63,28 @@ class CollectNukeWrites(pyblish.api.ContextPlugin):
                int(last_frame)
            )

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = list()
        try:
            collected_frames = os.listdir(output_dir)
            self.log.debug("collected_frames: {}".format(label))
            instance.data["files"].append(collected_frames)

            representation = {
                'name': ext,
                'ext': ext,
                'files': collected_frames,
                "stagingDir": output_dir,
                "anatomy_template": "render"
            }
            instance.data["representations"].append(representation)

        except Exception:
            self.log.debug("couldn't collect frames: {}".format(label))

        # except Exception:
        #     self.log.debug("couldn't collect frames: {}".format(label))

        instance.data.update({
            "path": path,
            "outputDir": output_dir,
@@ -49,18 +49,27 @@ class NukeRenderLocal(pype.api.Extractor):
        # swap path back to publish path
        path = node['file'].value()
        node['file'].setValue(path.replace(temp_dir, output_dir))
        ext = node["file_type"].value()

        if "files" not in instance.data:
            instance.data["files"] = list()
        if "representations" not in instance.data:
            instance.data["representations"] = []

        instance.data["files"] = [os.listdir(temp_dir)]
        collected_frames = os.listdir(temp_dir)
        repre = {
            'name': ext,
            'ext': ext,
            'files': collected_frames,
            "stagingDir": temp_dir,
            "anatomy_template": "render"
        }
        instance.data["representations"].append(repre)

        self.log.info("Extracted instance '{0}' to: {1}".format(
            instance.name,
            output_dir
            temp_dir
        ))

        collections, remainder = clique.assemble(*instance.data['files'])
        collections, remainder = clique.assemble(collected_frames)
        self.log.info('collections: {}'.format(str(collections)))

        if collections:
|
|||
|
||||
def transcode_mov(self, instance):
|
||||
collection = instance.data["collection"]
|
||||
staging_dir = instance.data["stagingDir"].replace("\\", "/")
|
||||
stagingDir = instance.data["stagingDir"].replace("\\", "/")
|
||||
file_name = collection.format("{head}mov")
|
||||
|
||||
review_mov = os.path.join(staging_dir, file_name).replace("\\", "/")
|
||||
review_mov = os.path.join(stagingDir, file_name).replace("\\", "/")
|
||||
|
||||
self.log.info("transcoding review mov: {0}".format(review_mov))
|
||||
if instance.data.get("baked_colorspace_movie"):
|
||||
|
|
@ -75,18 +75,33 @@ class ExtractDataForReview(pype.api.Extractor):
|
|||
instance.data["baked_colorspace_movie"]))
|
||||
os.remove(instance.data["baked_colorspace_movie"])
|
||||
|
||||
instance.data["files"].append(file_name)
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = []
|
||||
|
||||
representation = {
|
||||
'name': 'mov',
|
||||
'ext': 'mov',
|
||||
'files': file_name,
|
||||
"stagingDir": stagingDir,
|
||||
"anatomy_template": "render",
|
||||
"thumbnail": False,
|
||||
"preview": True,
|
||||
'startFrameReview': instance.data['startFrame'],
|
||||
'endFrameReview': instance.data['endFrame'],
|
||||
'frameRate': instance.context.data["framerate"]
|
||||
}
|
||||
instance.data["representations"].append(representation)
|
||||
|
||||
def render_review_representation(self,
|
||||
instance,
|
||||
representation="mov"):
|
||||
|
||||
assert instance.data['files'], "Instance data files should't be empty!"
|
||||
assert instance.data['representations'][0]['files'], "Instance data files should't be empty!"
|
||||
|
||||
import nuke
|
||||
temporary_nodes = []
|
||||
staging_dir = instance.data["stagingDir"].replace("\\", "/")
|
||||
self.log.debug("StagingDir `{0}`...".format(staging_dir))
|
||||
stagingDir = instance.data["stagingDir"].replace("\\", "/")
|
||||
self.log.debug("StagingDir `{0}`...".format(stagingDir))
|
||||
|
||||
collection = instance.data.get("collection", None)
|
||||
|
||||
|
|
@ -108,7 +123,7 @@ class ExtractDataForReview(pype.api.Extractor):
|
|||
node = previous_node = nuke.createNode("Read")
|
||||
|
||||
node["file"].setValue(
|
||||
os.path.join(staging_dir, fname).replace("\\", "/"))
|
||||
os.path.join(stagingDir, fname).replace("\\", "/"))
|
||||
|
||||
node["first"].setValue(first_frame)
|
||||
node["origfirst"].setValue(first_frame)
|
||||
|
|
@ -147,7 +162,7 @@ class ExtractDataForReview(pype.api.Extractor):
         if representation in "mov":
             file = fhead + "baked.mov"
-            path = os.path.join(staging_dir, file).replace("\\", "/")
+            path = os.path.join(stagingDir, file).replace("\\", "/")
             self.log.debug("Path: {}".format(path))
             instance.data["baked_colorspace_movie"] = path
             write_node["file"].setValue(path)
@ -155,22 +170,39 @@ class ExtractDataForReview(pype.api.Extractor):
             write_node["raw"].setValue(1)
             write_node.setInput(0, previous_node)
             temporary_nodes.append(write_node)
+            thumbnail = False
+            preview = True

         elif representation in "jpeg":
             file = fhead + "jpeg"
-            path = os.path.join(staging_dir, file).replace("\\", "/")
+            path = os.path.join(stagingDir, file).replace("\\", "/")
             instance.data["thumbnail"] = path
             write_node["file"].setValue(path)
             write_node["file_type"].setValue("jpeg")
             write_node["raw"].setValue(1)
             write_node.setInput(0, previous_node)
             temporary_nodes.append(write_node)
+            thumbnail = True
+            preview = False

             # retime for
             first_frame = int(last_frame) / 2
             last_frame = int(last_frame) / 2

-        # add into files for integration as representation
-        instance.data["files"].append(file)
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        repre = {
+            'name': representation,
+            'ext': representation,
+            'files': file,
+            "stagingDir": stagingDir,
+            "anatomy_template": "render",
+            "thumbnail": thumbnail,
+            "preview": preview
+        }
+        instance.data["representations"].append(repre)

         # Render frames
         nuke.execute(write_node.name(), int(first_frame), int(last_frame))
@ -19,16 +19,22 @@ class ExtractScript(pype.api.Extractor):
         current_script = instance.context.data["currentFile"]

         # Define extract output file path
-        dir_path = self.staging_dir(instance)
+        stagingdir = self.staging_dir(instance)
         filename = "{0}".format(instance.data["name"])
-        path = os.path.join(dir_path, filename)
+        path = os.path.join(stagingdir, filename)

         self.log.info("Performing extraction..")
         shutil.copy(current_script, path)

-        if "files" not in instance.data:
-            instance.data["files"] = list()
+        if "representations" not in instance.data:
+            instance.data["representations"] = []

-        instance.data["files"].append(filename)
+        representation = {
+            'name': 'nk',
+            'ext': '.nk',
+            'files': filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(representation)

         self.log.info("Extracted instance '%s' to: %s" % (instance.name, path))
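These extractors all feed the consolidated integrator through the same representation contract; a sketch of the fields used across this diff (values hypothetical):

    representation = {
        "name": "exr",                 # usually the extension
        "ext": "exr",                  # note ExtractScript's '.nk' keeps the dot
        "files": ["seq.1001.exr"],     # list for sequences, plain string for single files
        "stagingDir": "/tmp/staging",  # where the extractor wrote the files
        "anatomy_template": "render",  # which anatomy template the integrator formats
        "thumbnail": False,            # optional flags read by the ftrack integrator
        "preview": True,
    }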
@ -30,33 +30,33 @@ class ValidatePrerenderedFrames(pyblish.api.InstancePlugin):
     hosts = ["nuke"]
     actions = [RepairCollectionAction]


     def process(self, instance):
-        self.log.debug(
-            'instance.data["files"]: {}'.format(instance.data['files']))
-
-        assert instance.data.get('files'), \
-            "no frames were collected, you need to render them"
-
-        collections, remainder = clique.assemble(*instance.data['files'])
-        self.log.info('collections: {}'.format(str(collections)))
-
-        collection = collections[0]
-
-        frame_length = instance.data["endFrame"] \
-            - instance.data["startFrame"] + 1
-
-        if frame_length is not 1:
-            assert len(collections) == 1, \
-                "There are multiple collections in the folder"
-            assert collection.is_contiguous(), \
-                "Some frames appear to be missing"
-
-        assert remainder is not None, "There are some extra files in folder"
-
-        self.log.info('frame_length: {}'.format(frame_length))
-        self.log.info('len(collection.indexes): {}'.format(
-            len(collection.indexes)))
-
-        assert len(
-            collection.indexes
-        ) is frame_length, "{} missing frames. Use "
-        "repair to render all frames".format(__name__)
-
-        instance.data['collection'] = collection
+        for repre in instance.data.get('representations'):
+            assert repre.get('files'), \
+                "no frames were collected, you need to render them"
+
+            collections, remainder = clique.assemble(repre["files"])
+            self.log.info('collections: {}'.format(str(collections)))
+
+            collection = collections[0]
+
+            frame_length = instance.data["endFrame"] \
+                - instance.data["startFrame"] + 1
+
+            if frame_length != 1:
+                assert len(collections) == 1, \
+                    "There are multiple collections in the folder"
+                assert collection.is_contiguous(), \
+                    "Some frames appear to be missing"
+
+            # note: `remainder` is a list; testing `is not None` always
+            # passes, so check that it is actually empty
+            assert not remainder, "There are some extra files in folder"
+
+            self.log.info('frame_length: {}'.format(frame_length))
+            self.log.info('len(collection.indexes): {}'.format(
+                len(collection.indexes)))
+
+            assert len(collection.indexes) == frame_length, (
+                "{} missing frames. Use repair to render "
+                "all frames".format(__name__))
+
+            instance.data['collection'] = collection
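For reference, the clique calls this validator leans on; a quick sketch with a deliberately broken sequence (names hypothetical):

    import clique

    collections, remainder = clique.assemble(
        ["beauty.1001.exr", "beauty.1002.exr", "beauty.1004.exr"])
    collection = collections[0]

    print(collection.is_contiguous())          # False, 1003 is missing
    print(sorted(collection.holes().indexes))  # [1003]
    print(len(collection.indexes))             # 3, not the expected 4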
40 pype/plugins/nukestudio/_unused/validate_projectroot.py Normal file
@ -0,0 +1,40 @@
from pyblish import api


class RepairProjectRoot(api.Action):

    label = "Repair"
    icon = "wrench"
    on = "failed"

    def process(self, context, plugin):
        import os

        project_root = os.path.join(
            os.path.dirname(context.data["currentFile"])
        )

        context.data["activeProject"].setProjectRoot(project_root)


class ValidateProjectRoot(api.ContextPlugin):
    """Validate the project root to the workspace directory."""

    order = api.ValidatorOrder
    label = "Project Root"
    hosts = ["nukestudio"]
    actions = [RepairProjectRoot]

    def process(self, context):
        import os

        workspace = os.path.join(
            os.path.dirname(context.data["currentFile"])
        )
        project_root = context.data["activeProject"].projectRoot()

        failure_message = (
            'The project root needs to be "{0}", it\'s currently: "{1}"'
        ).format(workspace, project_root)

        assert project_root == workspace, failure_message
@ -8,7 +8,7 @@ class ValidateClip(api.InstancePlugin):
     order = api.ValidatorOrder
     families = ["clip"]
-    match = api.Exact
+    # match = api.Exact
     label = "Validate Track Item"
     hosts = ["nukestudio"]
     optional = True
@ -4,6 +4,7 @@ import pyblish.api
 class CollectActiveProject(pyblish.api.ContextPlugin):
     """Inject the active project into context"""

     label = "Collect Active Project"
+    order = pyblish.api.CollectorOrder - 0.2

     def process(self, context):
@ -23,9 +23,8 @@ class CollectClips(api.ContextPlugin):
             data[item.name()] = {
                 "item": item,
                 "tasks": [],
-                "startFrame": item.timelineIn(),
-                "endFrame": item.timelineOut()
+                "startFrame": int(item.timelineIn()),
+                "endFrame": int(item.timelineOut())
             }

         for key, value in data.items():
@ -36,7 +35,6 @@ class CollectClips(api.ContextPlugin):
                 asset=value["item"].name(),
                 item=value["item"],
                 family=family,
                 tasks=value["tasks"],
                 startFrame=value["startFrame"],
                 endFrame=value["endFrame"],
-                handles=0
@ -3,11 +3,10 @@ from pyblish import api
 class CollectFramerate(api.ContextPlugin):
     """Collect framerate from selected sequence."""

-    order = api.CollectorOrder
+    order = api.CollectorOrder + 0.01
     label = "Collect Framerate"
     hosts = ["nukestudio"]

     def process(self, context):
-        for item in context.data.get("selection", []):
-            context.data["framerate"] = item.sequence().framerate().toFloat()
-            return
+        sequence = context.data["activeSequence"]
+        context.data["framerate"] = sequence.framerate().toFloat()
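The fractional offsets are what sequence these collectors: pyblish sorts plugins by `order`, so the framerate collector (+0.01) is guaranteed to run after the sequence collector (0.0) and before the hierarchy collectors (+0.1 and +0.101). A toy sketch of the mechanism (plugin names hypothetical):

    import pyblish.api

    class CollectSequenceToy(pyblish.api.ContextPlugin):
        order = pyblish.api.CollectorOrder          # 0.0

        def process(self, context):
            context.data["activeSequence"] = "seq"  # stand-in for hiero's sequence

    class CollectFramerateToy(pyblish.api.ContextPlugin):
        order = pyblish.api.CollectorOrder + 0.01   # guaranteed to run later

        def process(self, context):
            assert "activeSequence" in context.data  # dependency already collected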
@ -1,8 +1,9 @@
 import pyblish.api
-from avalon import api
+import avalon.api as avalon
+import re


-class CollectHierarchyContext(pyblish.api.ContextPlugin):
+class CollectHierarchyInstance(pyblish.api.InstancePlugin):
     """Collecting hierarchy context from `parents` and `hierarchy` data
     present in `clip` family instances coming from the request json data file
@ -11,8 +12,118 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
     don't exist yet
     """

-    label = "Collect Hierarchy Context"
+    label = "Collect Hierarchy Clip"
     order = pyblish.api.CollectorOrder + 0.1
     families = ["clip"]

+    def convert_to_entity(self, key, value):
+        # ftrack compatible entity types
+        types = {"shot": "Shot",
+                 "folder": "Folder",
+                 "episode": "Episode",
+                 "sequence": "Sequence",
+                 "track": "Sequence",
+                 }
+        # convert to entity type
+        entity_type = types.get(key, None)
+
+        # return if any
+        if entity_type:
+            return {"entityType": entity_type, "entityName": value}
+
+    def process(self, instance):
+        context = instance.context
+        tags = instance.data.get("tags", None)
+        clip = instance.data["item"]
+        asset = instance.data.get("asset")
+
+        # build data for inner nukestudio project property
+        data = {
+            "sequence": context.data['activeSequence'].name().replace(' ', '_'),
+            "track": clip.parent().name().replace(' ', '_'),
+            "shot": asset
+        }
+        self.log.debug("__ data: {}".format(data))
+
+        # checking if tags are available
+        if not tags:
+            return
+
+        # loop through all tags
+        for t in tags:
+            t_metadata = dict(t["metadata"])
+            t_type = t_metadata.get("tag.label", "")
+            t_note = t_metadata.get("tag.note", "")
+
+            # and find only the hierarchical tag
+            if "hierarchy" in t_type.lower():
+                d_metadata = dict()
+                parents = list()
+
+                # main template from Tag.note
+                template = t_note
+
+                # if shot is in the template, strip it
+                if "shot" in template.lower():
+                    template = "/".join(template.split('/')[0:-1])
+
+                # take the template from Tag.note and break it into parts
+                pattern = re.compile(r"^\{([a-z]*?)\}")
+                par_split = [pattern.findall(t)[0]
+                             for t in template.split("/")]
+
+                # format all {} in two layers
+                for k, v in t_metadata.items():
+                    new_k = k.split(".")[1]
+                    try:
+                        # first try all data and context data to
+                        # add to individual properties
+                        new_v = str(v).format(
+                            **dict(context.data, **data))
+                        d_metadata[new_k] = new_v
+
+                        if 'shot' in new_k:
+                            instance.data["asset"] = new_v
+
+                        # create parents
+                        # find matching index of order
+                        p_match_i = [i for i, p in enumerate(par_split)
+                                     if new_k in p]
+
+                        # if any is matching then convert to entity_types
+                        if p_match_i:
+                            parent = self.convert_to_entity(new_k, new_v)
+                            parents.insert(p_match_i[0], parent)
+                    except Exception:
+                        d_metadata[new_k] = v
+
+                # lastly fill those individual properties into
+                # the template and format it with the collected data
+                hierarchy = template.format(
+                    **d_metadata)
+
+                # a hierarchy attribute must not exist yet;
+                # only one Hierarchy tag per clip is allowed
+                hd = instance.data.get("hierarchy")
+                self.log.info("__ hd: {}".format(hd))
+                assert not hd, (
+                    "Only one Hierarchy Tag is allowed. "
+                    "Clip: `{}`".format(asset))
+
+                # add the formatted hierarchy path into instance data
+                instance.data["hierarchy"] = hierarchy
+                instance.data["parents"] = parents
+                self.log.debug("__ hierarchy.format: {}".format(hierarchy))
+                self.log.debug("__ parents: {}".format(parents))
+                self.log.debug("__ d_metadata: {}".format(d_metadata))
+
+
+class CollectHierarchyContext(pyblish.api.ContextPlugin):
+    '''Collecting hierarchy from instances and building
+    the context hierarchy tree
+    '''
+
+    label = "Collect Hierarchy Context"
+    order = pyblish.api.CollectorOrder + 0.101

     def update_dict(self, ex_dict, new_dict):
         for key in ex_dict:
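The Hierarchy tag note is a path template such as `{folder}/{sequence}/{shot}`; the collector strips the shot token, maps each remaining token to an ftrack entity type, and formats the rest with the tag's own metadata. A condensed, self-contained sketch of that flow (tag values hypothetical):

    import re

    template = "{folder}/{sequence}/{shot}"
    metadata = {"folder": "EP01", "sequence": "sq010", "shot": "sh020"}

    # drop the trailing {shot} token, as the collector does
    template = "/".join(template.split("/")[:-1])   # "{folder}/{sequence}"

    # token names in template order
    pattern = re.compile(r"^\{([a-z]*?)\}")
    tokens = [pattern.findall(part)[0] for part in template.split("/")]
    print(tokens)                       # ['folder', 'sequence']

    print(template.format(**metadata))  # EP01/sq010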
@ -23,28 +134,31 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
         return new_dict

     def process(self, context):
-        json_data = context.data.get("jsonData", None)
+        instances = context[:]
+        # create hierarchyContext attr if context has none
+        self.log.debug("__ instances: {}".format(context[:]))

         temp_context = {}
-        for instance in json_data['instances']:
-            if instance['family'] in 'projectfile':
+        for instance in instances:
+            if 'projectfile' in instance.data.get('family', ''):
                 continue

             in_info = {}
-            name = instance['name']
+            name = instance.data["asset"]
             # suppose that all instances are Shots
             in_info['entity_type'] = 'Shot'

-            instance_pyblish = [
-                i for i in context.data["instances"] if i.data['asset'] in name][0]
             # get custom attributes of the shot
             in_info['custom_attributes'] = {
-                'fend': instance_pyblish.data['endFrame'],
-                'fstart': instance_pyblish.data['startFrame'],
-                'fps': instance_pyblish.data['fps']
+                'fend': instance.data['endFrame'],
+                'fstart': instance.data['startFrame'],
+                'fps': context.data["framerate"]
             }

-            in_info['tasks'] = instance['tasks']
+            in_info['tasks'] = instance.data['tasks']

-            parents = instance.get('parents', [])
+            parents = instance.data.get('parents', [])

             actual = {name: in_info}
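Each processed instance becomes one node in the nested `hierarchyContext` tree that the ftrack integrator later walks; its rough shape (values hypothetical):

    hierarchy_context = {
        "MyProject": {
            "entity_type": "Project",
            "childs": {
                "sq010": {
                    "entity_type": "Sequence",
                    "childs": {
                        "sh020": {
                            "entity_type": "Shot",
                            "custom_attributes": {
                                "fstart": 1001, "fend": 1050, "fps": 25.0},
                            "tasks": ["compositing"],
                        },
                    },
                },
            },
        },
    }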
@ -60,7 +174,7 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
         self.log.debug(temp_context)

         # TODO: a 100% sure way to get the project! Will it be Name or Code?
-        project_name = api.Session["AVALON_PROJECT"]
+        project_name = avalon.Session["AVALON_PROJECT"]
         final_context = {}
         final_context[project_name] = {}
         final_context[project_name]['entity_type'] = 'Project'
15 pype/plugins/nukestudio/publish/collect_project_root.py Normal file
@ -0,0 +1,15 @@
import pyblish.api
import avalon.api as avalon
import os


class CollectActiveProjectRoot(pyblish.api.ContextPlugin):
    """Inject the active project root into context"""

    label = "Collect Project Root"
    order = pyblish.api.CollectorOrder - 0.1

    def process(self, context):
        S = avalon.Session
        context.data["projectroot"] = os.path.normpath(
            os.path.join(S['AVALON_PROJECTS'], S['AVALON_PROJECT'])
        )
13 pype/plugins/nukestudio/publish/collect_sequence.py Normal file
@ -0,0 +1,13 @@
from pyblish import api
import hiero


class CollectSequence(api.ContextPlugin):
    """Collect the active sequence."""

    order = api.CollectorOrder
    label = "Collect Sequence"
    hosts = ["nukestudio"]

    def process(self, context):
        context.data['activeSequence'] = hiero.ui.activeSequence()
41 pype/plugins/nukestudio/publish/collect_tag_tasks.py Normal file
@ -0,0 +1,41 @@
from pyblish import api


class CollectClipTagTasks(api.InstancePlugin):
    """Collect task tags from selected track items."""

    order = api.CollectorOrder + 0.006
    label = "Collect Tag Tasks"
    hosts = ["nukestudio"]
    families = ['clip']

    def process(self, instance):
        # gets tags
        tags = instance.data["tags"]

        # gets presets for nukestudio
        presets = instance.context.data['presets'][
            instance.context.data['host']]

        # find the preset for default tasks
        default_tasks = presets['rules_tasks']['defaultTasks']

        tasks = list()
        for t in tags:
            t_metadata = dict(t["metadata"])
            t_family = t_metadata.get("tag.family", "")

            # gets only task family tags and collect labels
            if "task" in t_family:
                t_task = t_metadata.get("tag.label", "")
                tasks.append(t_task)

        if tasks:
            instance.data["tasks"] = tasks
        else:
            # add tasks from presets if no task tag
            instance.data["tasks"] = default_tasks

        self.log.info("Collected Tasks from Tags: `{}`".format(
            instance.data["tasks"]))
        return
@ -4,7 +4,7 @@ from pyblish import api
 class CollectClipTags(api.InstancePlugin):
     """Collect Tags from selected track items."""

-    order = api.CollectorOrder
+    order = api.CollectorOrder + 0.005
     label = "Collect Tags"
     hosts = ["nukestudio"]
     families = ['clip']
@ -19,9 +19,10 @@ class IntegrateAssumedDestination(pyblish.api.InstancePlugin):
         # template = instance.data["template"]

         anatomy = instance.context.data['anatomy']
-        # template = anatomy.publish.path
-        # self.log.info(anatomy.templates)
         anatomy_filled = anatomy.format(template_data)
-        mock_template = anatomy_filled.publish.path
+        # self.log.info(anatomy_filled)
+        mock_template = anatomy_filled["publish"]["path"]

         # For now assume resources end up in a "resources" folder in the
         # published folder
|
@ -40,12 +40,9 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
|
|||
|
||||
input_data = context.data["hierarchyContext"]
|
||||
|
||||
# adding ftrack types from presets
|
||||
ftrack_types = context.data['ftrackTypes']
|
||||
self.import_to_ftrack(input_data)
|
||||
|
||||
self.import_to_ftrack(input_data, ftrack_types)
|
||||
|
||||
def import_to_ftrack(self, input_data, ftrack_types, parent=None):
|
||||
def import_to_ftrack(self, input_data, parent=None):
|
||||
for entity_name in input_data:
|
||||
entity_data = input_data[entity_name]
|
||||
entity_type = entity_data['entity_type'].capitalize()
|
||||
|
|
@ -82,7 +79,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
             # CUSTOM ATTRIBUTES
             custom_attributes = entity_data.get('custom_attributes', [])
             instances = [
-                i for i in self.context.data["instances"] if i.data['asset'] in entity['name']]
+                i for i in self.context[:] if i.data['asset'] in entity['name']]
             for key in custom_attributes:
                 assert (key in entity['custom_attributes']), (
                     'Missing custom attribute')
@ -111,14 +108,14 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
             for task in tasks_to_create:
                 self.create_task(
                     name=task,
-                    task_type=ftrack_types[task],
+                    task_type=task.capitalize(),
                     parent=entity
                 )
                 self.session.commit()

             if 'childs' in entity_data:
                 self.import_to_ftrack(
-                    entity_data['childs'], ftrack_types, entity)
+                    entity_data['childs'], entity)

     def get_all_task_types(self, project):
         tasks = {}
22 pype/plugins/nukestudio/publish/validate_hierarchy.py Normal file
@ -0,0 +1,22 @@
from pyblish import api


class ValidateHierarchy(api.InstancePlugin):
    """Validate clip's hierarchy data."""

    order = api.ValidatorOrder
    families = ["clip"]
    label = "Validate Hierarchy"
    hosts = ["nukestudio"]

    def process(self, instance):
        asset_name = instance.data.get("asset", None)
        hierarchy = instance.data.get("hierarchy", None)
        parents = instance.data.get("parents", None)

        assert hierarchy, (
            "Hierarchy Tag has to be set and added to "
            "clip `{}`".format(asset_name))
        assert parents, (
            "Parents built from the Hierarchy Tag have to be set and "
            "added to clip `{}`".format(asset_name))
@ -29,14 +29,3 @@ class ValidateNames(api.InstancePlugin):
         msg = "Sequence \"{0}\" ends with a whitespace."
         msg = msg.format(item.parent().parent().name())
         assert not item.parent().parent().name().endswith(" "), msg
-
-
-class ValidateNamesFtrack(ValidateNames):
-    """Validate sequence, video track and track item names.
-
-    Because we are matching the families exactly, we need this plugin to
-    accommodate for the ftrack family addition.
-    """
-
-    order = api.ValidatorOrder
-    families = ["clip", "ftrack"]
@ -1,52 +0,0 @@
from pyblish import api


class RepairProjectRoot(api.Action):

    label = "Repair"
    icon = "wrench"
    on = "failed"

    def process(self, context, plugin):
        import os

        workspace = os.path.join(
            os.path.dirname(context.data["currentFile"]),
            "workspace"
        ).replace("\\", "/")

        if not os.path.exists(workspace):
            os.makedirs(workspace)

        context.data["activeProject"].setProjectRoot(workspace)

        # Need to manually fix the tasks "_projectRoot" attribute, because
        # setting the project root is not enough.
        submission = context.data.get("submission", None)
        if submission:
            for task in submission.getLeafTasks():
                task._projectRoot = workspace


class ValidateProjectRoot(api.ContextPlugin):
    """Validate the project root to the workspace directory."""

    order = api.ValidatorOrder
    label = "Project Root"
    hosts = ["nukestudio"]
    actions = [RepairProjectRoot]

    def process(self, context):
        import os

        workspace = os.path.join(
            os.path.dirname(context.data["currentFile"]),
            "workspace"
        ).replace("\\", "/")
        project_root = context.data["activeProject"].projectRoot()

        failure_message = (
            'The project root needs to be "{0}", it\'s currently: "{1}"'
        ).format(workspace, project_root)

        assert project_root == workspace, failure_message
@ -1,40 +0,0 @@
import os
import pyblish.api

try:
    import ftrack_api_old as ftrack_api
except Exception:
    import ftrack_api


class CollectFtrackApi(pyblish.api.ContextPlugin):
    """ Collects an ftrack session and the current task id. """

    order = pyblish.api.CollectorOrder
    label = "Collect Ftrack Api"

    def process(self, context):

        # Collect session
        session = ftrack_api.Session()
        context.data["ftrackSession"] = session

        # Collect task

        project = os.environ.get('AVALON_PROJECT', '')
        asset = os.environ.get('AVALON_ASSET', '')
        task = os.environ.get('AVALON_TASK', None)

        if task:
            result = session.query('Task where\
                project.full_name is "{0}" and\
                name is "{1}" and\
                parent.name is "{2}"'.format(project, task, asset)).one()
            context.data["ftrackTask"] = result
        else:
            result = session.query('TypedContext where\
                project.full_name is "{0}" and\
                name is "{1}"'.format(project, asset)).one()
            context.data["ftrackEntity"] = result

        self.log.info(result)
@ -1,17 +0,0 @@
import pype.api as pype
from pypeapp import Anatomy

import pyblish.api


class CollectTemplates(pyblish.api.ContextPlugin):
    """Inject anatomy templates into context"""

    order = pyblish.api.CollectorOrder
    label = "Collect Templates"

    def process(self, context):
        # pype.load_data_from_templates()
        context.data['anatomy'] = Anatomy()
        self.log.info("Anatomy templates collected...")
|
@ -1,12 +0,0 @@
|
|||
import pyblish.api
|
||||
from avalon import api
|
||||
|
||||
|
||||
class CollectTime(pyblish.api.ContextPlugin):
|
||||
"""Store global time at the time of publish"""
|
||||
|
||||
label = "Collect Current Time"
|
||||
order = pyblish.api.CollectorOrder
|
||||
|
||||
def process(self, context):
|
||||
context.data["time"] = api.time()
|
||||
|
|
@ -1,315 +0,0 @@
import os
import sys
import pyblish.api
import clique


class IntegrateFtrackApi(pyblish.api.InstancePlugin):
    """ Commit components to server. """

    order = pyblish.api.IntegratorOrder + 0.499
    label = "Integrate Ftrack Api"
    families = ["ftrack"]

    def query(self, entitytype, data):
        """ Generate a query expression from data supplied.

        If a value is not a string, we'll add the id of the entity to the
        query.

        Args:
            entitytype (str): The type of entity to query.
            data (dict): The data to identify the entity.
            exclusions (list): All keys to exclude from the query.

        Returns:
            str: String query to use with "session.query"
        """
        queries = []
        if sys.version_info[0] < 3:
            for key, value in data.iteritems():
                if not isinstance(value, (basestring, int)):
                    self.log.info("value: {}".format(value))
                    if "id" in value.keys():
                        queries.append(
                            "{0}.id is \"{1}\"".format(key, value["id"])
                        )
                else:
                    queries.append("{0} is \"{1}\"".format(key, value))
        else:
            for key, value in data.items():
                if not isinstance(value, (str, int)):
                    self.log.info("value: {}".format(value))
                    if "id" in value.keys():
                        queries.append(
                            "{0}.id is \"{1}\"".format(key, value["id"])
                        )
                else:
                    queries.append("{0} is \"{1}\"".format(key, value))

        query = (
            "select id from " + entitytype + " where " + " and ".join(queries)
        )
        self.log.debug(query)
        return query

    def process(self, instance):

        session = instance.context.data["ftrackSession"]
        if instance.context.data.get("ftrackTask"):
            task = instance.context.data["ftrackTask"]
            name = task['full_name']
            parent = task["parent"]
        elif instance.context.data.get("ftrackEntity"):
            task = None
            name = instance.context.data.get("ftrackEntity")['name']
            parent = instance.context.data.get("ftrackEntity")

        info_msg = "Created new {entity_type} with data: {data}"
        info_msg += ", metadata: {metadata}."

        # Iterate over components and publish
        for data in instance.data.get("ftrackComponentsList", []):

            # AssetType
            # Get existing entity.
            assettype_data = {"short": "upload"}
            assettype_data.update(data.get("assettype_data", {}))
            self.log.debug("data: {}".format(data))

            assettype_entity = session.query(
                self.query("AssetType", assettype_data)
            ).first()

            # Create a new entity if none exists.
            if not assettype_entity:
                assettype_entity = session.create("AssetType", assettype_data)
                self.log.debug(
                    "Created new AssetType with data: {}".format(
                        assettype_data)
                )

            # Asset
            # Get existing entity.
            asset_data = {
                "name": name,
                "type": assettype_entity,
                "parent": parent,
            }
            asset_data.update(data.get("asset_data", {}))

            asset_entity = session.query(
                self.query("Asset", asset_data)
            ).first()

            self.log.info("asset entity: {}".format(asset_entity))

            # Extracting metadata, and adding after entity creation. This is
            # due to a ftrack_api bug where you can't add metadata on creation.
            asset_metadata = asset_data.pop("metadata", {})

            # Create a new entity if none exists.
            if not asset_entity:
                asset_entity = session.create("Asset", asset_data)
                self.log.debug(
                    info_msg.format(
                        entity_type="Asset",
                        data=asset_data,
                        metadata=asset_metadata
                    )
                )

            # Adding metadata
            existing_asset_metadata = asset_entity["metadata"]
            existing_asset_metadata.update(asset_metadata)
            asset_entity["metadata"] = existing_asset_metadata

            # AssetVersion
            # Get existing entity.
            assetversion_data = {
                "version": 0,
                "asset": asset_entity,
            }
            if task:
                assetversion_data['task'] = task

            assetversion_data.update(data.get("assetversion_data", {}))

            assetversion_entity = session.query(
                self.query("AssetVersion", assetversion_data)
            ).first()

            # Extracting metadata, and adding after entity creation. This is
            # due to a ftrack_api bug where you can't add metadata on creation.
            assetversion_metadata = assetversion_data.pop("metadata", {})

            # Create a new entity if none exists.
            if not assetversion_entity:
                assetversion_entity = session.create(
                    "AssetVersion", assetversion_data
                )
                self.log.debug(
                    info_msg.format(
                        entity_type="AssetVersion",
                        data=assetversion_data,
                        metadata=assetversion_metadata
                    )
                )

            # Adding metadata
            existing_assetversion_metadata = assetversion_entity["metadata"]
            existing_assetversion_metadata.update(assetversion_metadata)
            assetversion_entity["metadata"] = existing_assetversion_metadata

            # Have to commit the version and asset, because location can't
            # determine the final location without.
            session.commit()

            # Component
            # Get existing entity.
            component_data = {
                "name": "main",
                "version": assetversion_entity
            }
            component_data.update(data.get("component_data", {}))

            component_entity = session.query(
                self.query("Component", component_data)
            ).first()

            component_overwrite = data.get("component_overwrite", False)
            location = data.get("component_location", session.pick_location())

            # Overwrite existing component data if requested.
            if component_entity and component_overwrite:

                origin_location = session.query(
                    "Location where name is \"ftrack.origin\""
                ).one()

                # Removing existing members from location
                components = list(component_entity.get("members", []))
                components += [component_entity]
                for component in components:
                    for loc in component["component_locations"]:
                        if location["id"] == loc["location_id"]:
                            location.remove_component(
                                component, recursive=False
                            )

                # Deleting existing members on component entity
                for member in component_entity.get("members", []):
                    session.delete(member)
                    del(member)

                session.commit()

                # Reset members in memory
                if "members" in component_entity.keys():
                    component_entity["members"] = []

                # Add components to origin location
                try:
                    collection = clique.parse(data["component_path"])
                except ValueError:
                    # Assume it's a single file
                    # Changing file type
                    name, ext = os.path.splitext(data["component_path"])
                    component_entity["file_type"] = ext

                    origin_location.add_component(
                        component_entity, data["component_path"]
                    )
                else:
                    # Changing file type
                    component_entity["file_type"] = collection.format("{tail}")

                    # Create member components for sequence.
                    for member_path in collection:

                        size = 0
                        try:
                            size = os.path.getsize(member_path)
                        except OSError:
                            pass

                        name = collection.match(member_path).group("index")

                        member_data = {
                            "name": name,
                            "container": component_entity,
                            "size": size,
                            "file_type": os.path.splitext(member_path)[-1]
                        }

                        component = session.create(
                            "FileComponent", member_data
                        )
                        origin_location.add_component(
                            component, member_path, recursive=False
                        )
                        component_entity["members"].append(component)

                # Add components to location.
                location.add_component(
                    component_entity, origin_location, recursive=True
                )

                data["component"] = component_entity
                msg = "Overwriting Component with path: {0}, data: {1}, "
                msg += "location: {2}"
                self.log.info(
                    msg.format(
                        data["component_path"],
                        component_data,
                        location
                    )
                )

            # Extracting metadata, and adding after entity creation. This is
            # due to a ftrack_api bug where you can't add metadata on creation.
            component_metadata = component_data.pop("metadata", {})

            # Create new component if none exists.
            new_component = False
            if not component_entity:
                component_entity = assetversion_entity.create_component(
                    data["component_path"],
                    data=component_data,
                    location=location
                )
                data["component"] = component_entity
                msg = "Created new Component with path: {0}, data: {1}"
                msg += ", metadata: {2}, location: {3}"
                self.log.info(
                    msg.format(
                        data["component_path"],
                        component_data,
                        component_metadata,
                        location
                    )
                )
                new_component = True

            # Adding metadata
            existing_component_metadata = component_entity["metadata"]
            existing_component_metadata.update(component_metadata)
            component_entity["metadata"] = existing_component_metadata

            # if component_data['name'] = 'ftrackreview-mp4-mp4':
            #     assetversion_entity["thumbnail_id"]

            # Setting assetversion thumbnail
            if data.get("thumbnail", False):
                assetversion_entity["thumbnail_id"] = component_entity["id"]

            # Inform user about no changes to the database.
            if (component_entity and not component_overwrite and
                    not new_component):
                data["component"] = component_entity
                self.log.info(
                    "Found existing component, and no request to overwrite. "
                    "Nothing has been changed."
                )
            else:
                # Commit changes.
                session.commit()
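For reference, the `query` helper above just stitches a data dict into an ftrack query expression; a sketch of what it would emit for hypothetical inputs:

    # query("AssetType", {"short": "upload"}) returns:
    #   select id from AssetType where short is "upload"

    # dict values contribute their id instead (ordering may vary):
    # query("Component", {"name": "main", "version": {"id": "1234"}}) returns:
    #   select id from Component where name is "main" and version.id is "1234"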
|
@ -1,101 +0,0 @@
|
|||
import pyblish.api
|
||||
import os
|
||||
import json
|
||||
|
||||
|
||||
class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
||||
"""Collect ftrack component data
|
||||
|
||||
Add ftrack component list to instance.
|
||||
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.IntegratorOrder + 0.48
|
||||
label = 'Integrate Ftrack Component'
|
||||
families = ["ftrack"]
|
||||
|
||||
family_mapping = {'camera': 'cam',
|
||||
'look': 'look',
|
||||
'mayaAscii': 'scene',
|
||||
'model': 'geo',
|
||||
'rig': 'rig',
|
||||
'setdress': 'setdress',
|
||||
'pointcache': 'cache',
|
||||
'write': 'img',
|
||||
'render': 'render',
|
||||
'nukescript': 'comp',
|
||||
'review': 'mov'}
|
||||
|
||||
def process(self, instance):
|
||||
self.log.debug('instance {}'.format(instance))
|
||||
|
||||
if instance.data.get('version'):
|
||||
version_number = int(instance.data.get('version'))
|
||||
|
||||
family = instance.data['family'].lower()
|
||||
|
||||
asset_type = ''
|
||||
asset_type = self.family_mapping[family]
|
||||
|
||||
componentList = []
|
||||
ft_session = instance.context.data["ftrackSession"]
|
||||
|
||||
components = instance.data['representations']
|
||||
|
||||
for comp in components:
|
||||
self.log.debug('component {}'.format(comp))
|
||||
# filename, ext = os.path.splitext(file)
|
||||
# self.log.debug('dest ext: ' + ext)
|
||||
|
||||
# ext = comp['Context']
|
||||
|
||||
if comp['thumbnail']:
|
||||
location = ft_session.query(
|
||||
'Location where name is "ftrack.server"').one()
|
||||
component_data = {
|
||||
"name": "thumbnail" # Default component name is "main".
|
||||
}
|
||||
elif comp['preview']:
|
||||
if not comp.get('startFrameReview'):
|
||||
comp['startFrameReview'] = comp['startFrame']
|
||||
if not comp.get('endFrameReview'):
|
||||
comp['endFrameReview'] = instance.data['endFrame']
|
||||
location = ft_session.query(
|
||||
'Location where name is "ftrack.server"').one()
|
||||
component_data = {
|
||||
# Default component name is "main".
|
||||
"name": "ftrackreview-mp4",
|
||||
"metadata": {'ftr_meta': json.dumps({
|
||||
'frameIn': int(comp['startFrameReview']),
|
||||
'frameOut': int(comp['endFrameReview']),
|
||||
'frameRate': float(comp['frameRate')]})}
|
||||
}
|
||||
else:
|
||||
component_data = {
|
||||
"name": comp['representation'] # Default component name is "main".
|
||||
}
|
||||
location = ft_session.query(
|
||||
'Location where name is "ftrack.unmanaged"').one()
|
||||
|
||||
self.log.debug('location {}'.format(location))
|
||||
|
||||
componentList.append({"assettype_data": {
|
||||
"short": asset_type,
|
||||
},
|
||||
"asset_data": {
|
||||
"name": instance.data["subset"],
|
||||
},
|
||||
"assetversion_data": {
|
||||
"version": version_number,
|
||||
},
|
||||
"component_data": component_data,
|
||||
"component_path": comp['published_path'],
|
||||
'component_location': location,
|
||||
"component_overwrite": False,
|
||||
"thumbnail": comp['thumbnail']
|
||||
}
|
||||
)
|
||||
|
||||
self.log.debug('componentsList: {}'.format(str(componentList)))
|
||||
instance.data["ftrackComponentsList"] = componentList
|
||||
|
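The `ftr_meta` metadata is what makes ftrack's web player treat the uploaded component as reviewable; a sketch of the payload it carries (frame values hypothetical):

    import json

    ftr_meta = json.dumps({
        "frameIn": 1001,    # first frame of the review range
        "frameOut": 1050,   # last frame of the review range
        "frameRate": 25.0,  # playback rate for the player
    })
    component_data = {
        "name": "ftrackreview-mp4",
        "metadata": {"ftr_meta": ftr_meta},
    }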
|
@ -1,436 +0,0 @@
|
|||
import os
|
||||
import logging
|
||||
import shutil
|
||||
import clique
|
||||
|
||||
import errno
|
||||
import pyblish.api
|
||||
from avalon import api, io
|
||||
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class IntegrateFrames(pyblish.api.InstancePlugin):
|
||||
"""Resolve any dependency issies
|
||||
|
||||
This plug-in resolves any paths which, if not updated might break
|
||||
the published file.
|
||||
|
||||
The order of families is important, when working with lookdev you want to
|
||||
first publish the texture, update the texture paths in the nodes and then
|
||||
publish the shading network. Same goes for file dependent assets.
|
||||
"""
|
||||
|
||||
label = "Integrate Frames"
|
||||
order = pyblish.api.IntegratorOrder
|
||||
families = [
|
||||
"imagesequence",
|
||||
"render",
|
||||
"write",
|
||||
"source",
|
||||
'review']
|
||||
|
||||
family_targets = [".frames", ".local", ".review", "review", "imagesequence", "render", "source"]
|
||||
exclude_families = ["clip"]
|
||||
|
||||
def process(self, instance):
|
||||
if [ef for ef in self.exclude_families
|
||||
if instance.data["family"] in ef]:
|
||||
return
|
||||
|
||||
families = [f for f in instance.data["families"]
|
||||
for search in self.family_targets
|
||||
if search in f]
|
||||
|
||||
if not families:
|
||||
return
|
||||
|
||||
self.register(instance)
|
||||
|
||||
# self.log.info("Integrating Asset in to the database ...")
|
||||
# self.log.info("instance.data: {}".format(instance.data))
|
||||
if instance.data.get('transfer', True):
|
||||
self.integrate(instance)
|
||||
|
||||
def register(self, instance):
|
||||
|
||||
# Required environment variables
|
||||
PROJECT = api.Session["AVALON_PROJECT"]
|
||||
ASSET = instance.data.get("asset") or api.Session["AVALON_ASSET"]
|
||||
LOCATION = api.Session["AVALON_LOCATION"]
|
||||
|
||||
context = instance.context
|
||||
# Atomicity
|
||||
#
|
||||
# Guarantee atomic publishes - each asset contains
|
||||
# an identical set of members.
|
||||
# __
|
||||
# / o
|
||||
# / \
|
||||
# | o |
|
||||
# \ /
|
||||
# o __/
|
||||
#
|
||||
assert all(result["success"] for result in context.data["results"]), (
|
||||
"Atomicity not held, aborting.")
|
||||
|
||||
# Assemble
|
||||
#
|
||||
# |
|
||||
# v
|
||||
# ---> <----
|
||||
# ^
|
||||
# |
|
||||
#
|
||||
# stagingdir = instance.data.get("stagingDir")
|
||||
# assert stagingdir, ("Incomplete instance \"%s\": "
|
||||
# "Missing reference to staging area." % instance)
|
||||
|
||||
# extra check if stagingDir actually exists and is available
|
||||
|
||||
# self.log.debug("Establishing staging directory @ %s" % stagingdir)
|
||||
|
||||
project = io.find_one({"type": "project"})
|
||||
|
||||
asset = io.find_one({"type": "asset",
|
||||
"name": ASSET,
|
||||
"parent": project["_id"]})
|
||||
|
||||
assert all([project, asset]), ("Could not find current project or "
|
||||
"asset '%s'" % ASSET)
|
||||
|
||||
subset = self.get_subset(asset, instance)
|
||||
|
||||
# get next version
|
||||
latest_version = io.find_one({"type": "version",
|
||||
"parent": subset["_id"]},
|
||||
{"name": True},
|
||||
sort=[("name", -1)])
|
||||
|
||||
next_version = 1
|
||||
if latest_version is not None:
|
||||
next_version += latest_version["name"]
|
||||
|
||||
self.log.info("Verifying version from assumed destination")
|
||||
|
||||
# assumed_data = instance.data["assumedTemplateData"]
|
||||
# assumed_version = assumed_data["version"]
|
||||
# if assumed_version != next_version:
|
||||
# raise AttributeError("Assumed version 'v{0:03d}' does not match"
|
||||
# "next version in database "
|
||||
# "('v{1:03d}')".format(assumed_version,
|
||||
# next_version))
|
||||
|
||||
if instance.data.get('version'):
|
||||
next_version = int(instance.data.get('version'))
|
||||
|
||||
instance.data['version'] = next_version
|
||||
|
||||
self.log.debug("Next version: v{0:03d}".format(next_version))
|
||||
|
||||
version_data = self.create_version_data(context, instance)
|
||||
version = self.create_version(subset=subset,
|
||||
version_number=next_version,
|
||||
locations=[LOCATION],
|
||||
data=version_data)
|
||||
|
||||
self.log.debug("Creating version ...")
|
||||
version_id = io.insert_one(version).inserted_id
|
||||
|
||||
# Write to disk
|
||||
# _
|
||||
# | |
|
||||
# _| |_
|
||||
# ____\ /
|
||||
# |\ \ / \
|
||||
# \ \ v \
|
||||
# \ \________.
|
||||
# \|________|
|
||||
#
|
||||
root = api.registered_root()
|
||||
hierarchy = ""
|
||||
parents = io.find_one({"type": 'asset', "name": ASSET})[
|
||||
'data']['parents']
|
||||
if parents and len(parents) > 0:
|
||||
# hierarchy = os.path.sep.join(hierarchy)
|
||||
hierarchy = os.path.join(*parents)
|
||||
|
||||
template_data = {"root": root,
|
||||
"project": {"name": PROJECT,
|
||||
"code": project['data']['code']},
|
||||
"silo": asset['silo'],
|
||||
"task": api.Session["AVALON_TASK"],
|
||||
"asset": ASSET,
|
||||
"family": instance.data['family'],
|
||||
"subset": subset["name"],
|
||||
"version": int(version["name"]),
|
||||
"hierarchy": hierarchy}
|
||||
|
||||
# template_publish = project["config"]["template"]["publish"]
|
||||
anatomy = instance.context.data['anatomy']
|
||||
|
||||
# Find the representations to transfer amongst the files
|
||||
# Each should be a single representation (as such, a single extension)
|
||||
representations = []
|
||||
destination_list = []
|
||||
|
||||
if 'transfers' not in instance.data:
|
||||
instance.data['transfers'] = []
|
||||
|
||||
# for repre in instance.data["representations"]:
|
||||
for idx, repre in enumerate(instance.data["representations"]):
|
||||
# Collection
|
||||
# _______
|
||||
# |______|\
|
||||
# | |\|
|
||||
# | ||
|
||||
# | ||
|
||||
# | ||
|
||||
# |_______|
|
||||
#
|
||||
|
||||
files = repre['files']
|
||||
|
||||
if len(files) > 1:
|
||||
|
||||
src_collections, remainder = clique.assemble(files)
|
||||
self.log.debug("dst_collections: {}".format(str(src_collections)))
|
||||
src_collection = src_collections[0]
|
||||
# Assert that each member has identical suffix
|
||||
src_head = src_collection.format("{head}")
|
||||
src_tail = ext = src_collection.format("{tail}")
|
||||
|
||||
test_dest_files = list()
|
||||
for i in [1, 2]:
|
||||
template_data["representation"] = repre['representation']
|
||||
template_data["frame"] = src_collection.format(
|
||||
"{padding}") % i
|
||||
anatomy_filled = anatomy.format(template_data)
|
||||
test_dest_files.append(anatomy_filled["render"]["path"])
|
||||
|
||||
dst_collections, remainder = clique.assemble(test_dest_files)
|
||||
dst_collection = dst_collections[0]
|
||||
dst_head = dst_collection.format("{head}")
|
||||
dst_tail = dst_collection.format("{tail}")
|
||||
|
||||
instance.data["representations"][idx]['published_path'] = dst_collection.format()
|
||||
|
||||
for i in src_collection.indexes:
|
||||
src_padding = src_collection.format("{padding}") % i
|
||||
src_file_name = "{0}{1}{2}".format(
|
||||
src_head, src_padding, src_tail)
|
||||
dst_padding = dst_collection.format("{padding}") % i
|
||||
dst = "{0}{1}{2}".format(dst_head, dst_padding, dst_tail)
|
||||
|
||||
# src = os.path.join(stagingdir, src_file_name)
|
||||
src = src_file_name
|
||||
self.log.debug("source: {}".format(src))
|
||||
|
||||
instance.data["transfers"].append([src, dst])
|
||||
|
||||
else:
|
||||
# Single file
|
||||
# _______
|
||||
# | |\
|
||||
# | |
|
||||
# | |
|
||||
# | |
|
||||
# |_______|
|
||||
#
|
||||
|
||||
template_data.pop("frame", None)
|
||||
|
||||
fname = files[0]
|
||||
|
||||
self.log.info("fname: {}".format(fname))
|
||||
|
||||
# assert not os.path.isabs(fname), (
|
||||
# "Given file name is a full path"
|
||||
# )
|
||||
# _, ext = os.path.splitext(fname)
|
||||
|
||||
template_data["representation"] = repre['representation']
|
||||
|
||||
# src = os.path.join(stagingdir, fname)
|
||||
src = src_file_name
|
||||
|
||||
anatomy_filled = anatomy.format(template_data)
|
||||
dst = anatomy_filled["render"]["path"]
|
||||
|
||||
instance.data["transfers"].append([src, dst])
|
||||
instance.data["representations"][idx]['published_path'] = dst
|
||||
|
||||
if repre['ext'] not in ["jpeg", "jpg", "mov", "mp4", "wav"]:
|
||||
template_data["frame"] = "#" * int(anatomy_filled["render"]["padding"])
|
||||
|
||||
anatomy_filled = anatomy.format(template_data)
|
||||
path_to_save = anatomy_filled["render"]["path"]
|
||||
template = anatomy.templates["render"]["path"]
|
||||
|
||||
self.log.debug("path_to_save: {}".format(path_to_save))
|
||||
|
||||
representation = {
|
||||
"schema": "pype:representation-2.0",
|
||||
"type": "representation",
|
||||
"parent": version_id,
|
||||
"name": repre['representation'],
|
||||
"data": {'path': path_to_save, 'template': template},
|
||||
"dependencies": instance.data.get("dependencies", "").split(),
|
||||
|
||||
# Imprint shortcut to context
|
||||
# for performance reasons.
|
||||
"context": {
|
||||
"root": root,
|
||||
"project": {
|
||||
"name": PROJECT,
|
||||
"code": project['data']['code']
|
||||
},
|
||||
"task": api.Session["AVALON_TASK"],
|
||||
"silo": asset['silo'],
|
||||
"asset": ASSET,
|
||||
"family": instance.data['family'],
|
||||
"subset": subset["name"],
|
||||
"version": int(version["name"]),
|
||||
"hierarchy": hierarchy,
|
||||
"representation": repre['representation']
|
||||
}
|
||||
}
|
||||
|
||||
destination_list.append(dst)
|
||||
instance.data['destination_list'] = destination_list
|
||||
representations.append(representation)
|
||||
|
||||
self.log.info("Registering {} items".format(len(representations)))
|
||||
io.insert_many(representations)
|
||||
|
||||
def integrate(self, instance):
|
||||
"""Move the files
|
||||
|
||||
Through `instance.data["transfers"]`
|
||||
|
||||
Args:
|
||||
instance: the instance to integrate
|
||||
"""
|
||||
|
||||
transfers = instance.data["transfers"]
|
||||
|
||||
for src, dest in transfers:
|
||||
src = os.path.normpath(src)
|
||||
dest = os.path.normpath(dest)
|
||||
if src in dest:
|
||||
continue
|
||||
|
||||
self.log.info("Copying file .. {} -> {}".format(src, dest))
|
||||
self.copy_file(src, dest)
|
||||
|
||||
def copy_file(self, src, dst):
|
||||
""" Copy given source to destination
|
||||
|
||||
Arguments:
|
||||
src (str): the source file which needs to be copied
|
||||
dst (str): the destination of the sourc file
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
|
||||
dirname = os.path.dirname(dst)
|
||||
try:
|
||||
os.makedirs(dirname)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
pass
|
||||
else:
|
||||
self.log.critical("An unexpected error occurred.")
|
||||
raise
|
||||
|
||||
shutil.copy(src, dst)
|
||||
|
||||
def get_subset(self, asset, instance):
|
||||
|
||||
subset = io.find_one({"type": "subset",
|
||||
"parent": asset["_id"],
|
||||
"name": instance.data["subset"]})
|
||||
|
||||
if subset is None:
|
||||
subset_name = instance.data["subset"]
|
||||
self.log.info("Subset '%s' not found, creating.." % subset_name)
|
||||
|
||||
_id = io.insert_one({
|
||||
"schema": "pype:subset-2.0",
|
||||
"type": "subset",
|
||||
"name": subset_name,
|
||||
"data": {},
|
||||
"parent": asset["_id"]
|
||||
}).inserted_id
|
||||
|
||||
subset = io.find_one({"_id": _id})
|
||||
|
||||
return subset
|
||||
|
||||
def create_version(self, subset, version_number, locations, data=None):
|
||||
""" Copy given source to destination
|
||||
|
||||
Args:
|
||||
subset (dict): the registered subset of the asset
|
||||
version_number (int): the version number
|
||||
locations (list): the currently registered locations
|
||||
|
||||
Returns:
|
||||
dict: collection of data to create a version
|
||||
"""
|
||||
# Imprint currently registered location
|
||||
version_locations = [location for location in locations if
|
||||
location is not None]
|
||||
|
||||
return {"schema": "pype:version-2.0",
|
||||
"type": "version",
|
||||
"parent": subset["_id"],
|
||||
"name": version_number,
|
||||
"locations": version_locations,
|
||||
"data": data}
|
||||
|
||||
def create_version_data(self, context, instance):
|
||||
"""Create the data collection for the version
|
||||
|
||||
Args:
|
||||
context: the current context
|
||||
instance: the current instance being published
|
||||
|
||||
Returns:
|
||||
dict: the required information with instance.data as key
|
||||
"""
|
||||
|
||||
families = []
|
||||
current_families = instance.data.get("families", list())
|
||||
instance_family = instance.data.get("family", None)
|
||||
|
||||
if instance_family is not None:
|
||||
families.append(instance_family)
|
||||
families += current_families
|
||||
|
||||
# try:
|
||||
# source = instance.data['source']
|
||||
# except KeyError:
|
||||
# source = context.data["currentFile"]
|
||||
#
|
||||
# relative_path = os.path.relpath(source, api.registered_root())
|
||||
# source = os.path.join("{root}", relative_path).replace("\\", "/")
|
||||
|
||||
source = "standalone"
|
||||
|
||||
version_data = {"families": families,
|
||||
"time": context.data["time"],
|
||||
"author": context.data["user"],
|
||||
"source": source,
|
||||
"comment": context.data.get("comment")}
|
||||
|
||||
# Include optional data if present in
|
||||
optionals = ["startFrame", "endFrame", "step",
|
||||
"handles", "colorspace", "fps", "outputDir"]
|
||||
|
||||
for key in optionals:
|
||||
if key in instance.data:
|
||||
version_data[key] = instance.data.get(key, None)
|
||||
|
||||
return version_data
|
||||
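The per-frame destination names above come straight from clique's `{head}{padding}{tail}` formatting; a small sketch of that assembly (paths hypothetical):

    import clique

    collections, _ = clique.assemble(["beauty.1001.exr", "beauty.1002.exr"])
    src = collections[0]

    head = src.format("{head}")        # 'beauty.'
    padding = src.format("{padding}")  # '%04d'
    tail = src.format("{tail}")        # '.exr'

    for i in src.indexes:
        print("{0}{1}{2}".format(head, padding % i, tail))
    # beauty.1001.exr
    # beauty.1002.exr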
238 pype/scripts/otio_burnin.py Normal file
@ -0,0 +1,238 @@
|
|||
import os
|
||||
import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins
|
||||
from pypeapp.lib import config
|
||||
from pype import api as pype
|
||||
# FFmpeg in PATH is required
|
||||
|
||||
|
||||
log = pype.Logger().get_logger("BurninWrapper", "burninwrap")
|
||||
|
||||
|
||||
class ModifiedBurnins(ffmpeg_burnins.Burnins):
|
||||
TOP_CENTERED = ffmpeg_burnins.TOP_CENTERED
|
||||
BOTTOM_CENTERED = ffmpeg_burnins.BOTTOM_CENTERED
|
||||
TOP_LEFT = ffmpeg_burnins.TOP_LEFT
|
||||
BOTTOM_LEFT = ffmpeg_burnins.BOTTOM_LEFT
|
||||
TOP_RIGHT = ffmpeg_burnins.TOP_RIGHT
|
||||
BOTTOM_RIGHT = ffmpeg_burnins.BOTTOM_RIGHT
|
||||
|
||||
options_init = {
|
||||
'opacity': 1,
|
||||
'x_offset': 5,
|
||||
'y_offset': 5,
|
||||
'bg_padding': 5,
|
||||
'bg_opacity': 0.5,
|
||||
'font_size': 42
|
||||
}
|
||||
|
||||
def __init__(self, source, streams=None, options_init=None):
|
||||
super().__init__(source, streams)
|
||||
if options_init:
|
||||
self.options_init.update(options_init)
|
||||
|
||||
def add_text(self, text, align, options=None):
|
||||
"""
|
||||
Adding static text to a filter.
|
||||
|
||||
:param str text: text to apply to the drawtext
|
||||
:param enum align: alignment, must use provided enum flags
|
||||
:param dict options: recommended to use TextOptions
|
||||
"""
|
||||
if not options:
|
||||
options = ffmpeg_burnins.TextOptions(**self.options_init)
|
||||
self._add_burnin(text, align, options, ffmpeg_burnins.DRAWTEXT)
|
||||
|
||||
def add_frame_numbers(self, align, options=None, start_frame=None):
|
||||
"""
|
||||
Convenience method to create the frame number expression.
|
||||
|
||||
:param enum align: alignment, must use provided enum flags
|
||||
:param dict options: recommended to use FrameNumberOptions
|
||||
"""
|
||||
if not options:
|
||||
options = ffmpeg_burnins.FrameNumberOptions(**self.options_init)
|
||||
if start_frame:
|
||||
options['frame_offset'] = start_frame
|
||||
|
||||
options['expression'] = r'%%{eif\:n+%d\:d}' % options['frame_offset']
|
||||
text = str(int(self.end_frame + options['frame_offset']))
|
||||
self._add_burnin(text, align, options, ffmpeg_burnins.DRAWTEXT)
|
||||
|
||||
def add_timecode(self, align, options=None, start_frame=None):
|
||||
"""
|
||||
Convenience method to create the frame number expression.
|
||||
|
||||
:param enum align: alignment, must use provided enum flags
|
||||
:param dict options: recommended to use TimeCodeOptions
|
||||
"""
|
||||
if not options:
|
||||
options = ffmpeg_burnins.TimeCodeOptions(**self.options_init)
|
||||
if start_frame:
|
||||
options['frame_offset'] = start_frame
|
||||
|
||||
timecode = ffmpeg_burnins._frames_to_timecode(
|
||||
options['frame_offset'],
|
||||
self.frame_rate
|
||||
)
|
||||
options = options.copy()
|
||||
if not options.get('fps'):
|
||||
options['fps'] = self.frame_rate
|
||||
|
||||
self._add_burnin(
|
||||
timecode.replace(':', r'\:'),
|
||||
align,
|
||||
options,
|
||||
ffmpeg_burnins.TIMECODE
|
||||
)
|
||||
|
||||
def _add_burnin(self, text, align, options, draw):
|
||||
"""
|
||||
Generic method for building the filter flags.
|
||||
:param str text: text to apply to the drawtext
|
||||
:param enum align: alignment, must use provided enum flags
|
||||
:param dict options:
|
||||
"""
|
||||
resolution = self.resolution
|
||||
data = {
|
||||
'text': options.get('expression') or text,
|
||||
'color': options['font_color'],
|
||||
'size': options['font_size']
|
||||
}
|
||||
data.update(options)
|
||||
data.update(ffmpeg_burnins._drawtext(align, resolution, text, options))
|
||||
if 'font' in data and ffmpeg_burnins._is_windows():
|
||||
data['font'] = data['font'].replace(os.sep, r'\\' + os.sep)
|
||||
data['font'] = data['font'].replace(':', r'\:')
|
||||
self.filters['drawtext'].append(draw % data)
|
||||
|
||||
if options.get('bg_color') is not None:
|
||||
box = ffmpeg_burnins.BOX % {
|
||||
'border': options['bg_padding'],
|
||||
'color': options['bg_color'],
|
||||
'opacity': options['bg_opacity']
|
||||
}
|
||||
self.filters['drawtext'][-1] += ':%s' % box
|
||||
|
||||
def command(self, output=None, args=None, overwrite=False):
|
||||
"""
|
||||
Generate the entire FFMPEG command.
|
||||
|
||||
:param str output: output file
|
||||
:param str args: additional FFMPEG arguments
|
||||
:param bool overwrite: overwrite the output if it exists
|
||||
:returns: completed command
|
||||
:rtype: str
|
||||
"""
|
||||
output = output or ''
|
||||
if overwrite:
|
||||
output = '-y {}'.format(output)
|
||||
|
||||
filters = ''
|
||||
if self.filter_string:
|
||||
filters = '-vf "{}"'.format(self.filter_string)
|
||||
|
||||
return (ffmpeg_burnins.FFMPEG % {
|
||||
'input': self.source,
|
||||
'output': output,
|
||||
'args': '%s ' % args if args else '',
|
||||
'filters': filters
|
||||
}).strip()
|
||||
|
||||
|
||||
def example(input_path, output_path):
    options_init = {
        'opacity': 1,
        'x_offset': 10,
        'y_offset': 10,
        'bg_padding': 10,
        'bg_opacity': 0.5,
        'font_size': 52
    }
    # First frame in burnin
    start_frame = 2000
    # Options init sets burnin look
    burnin = ModifiedBurnins(input_path, options_init=options_init)
    # Static text
    burnin.add_text('My Text', ModifiedBurnins.TOP_CENTERED)
    # Frame number
    burnin.add_frame_numbers(ModifiedBurnins.TOP_RIGHT, start_frame=start_frame)
    # Timecode
    burnin.add_timecode(ModifiedBurnins.TOP_LEFT, start_frame=start_frame)
    # Start render (overwrite output file if it exists)
    burnin.render(output_path, overwrite=True)


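# For orientation, the example above results in an ffmpeg call roughly of this
# shape (abbreviated sketch; the exact template is ffmpeg_burnins.FFMPEG):
#
#   ffmpeg -loglevel panic -i <input> \
#       -vf "drawtext=text='My Text':...,drawtext=text='%{eif\:n+2000\:d}':..." \
#       -y <output>

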
def example_with_presets(input_path, output_path, data):
    presets = config.get_presets().get('tools', {}).get('burnins', {})
    options_init = presets.get('options')

    burnin = ModifiedBurnins(input_path, options_init=options_init)

    start_frame = data.get("start_frame")
    for align_text, preset in presets.get('burnins', {}).items():
        align = None
        if align_text == 'TOP_LEFT':
            align = ModifiedBurnins.TOP_LEFT
        elif align_text == 'TOP_CENTERED':
            align = ModifiedBurnins.TOP_CENTERED
        elif align_text == 'TOP_RIGHT':
            align = ModifiedBurnins.TOP_RIGHT
        elif align_text == 'BOTTOM_LEFT':
            align = ModifiedBurnins.BOTTOM_LEFT
        elif align_text == 'BOTTOM_CENTERED':
            align = ModifiedBurnins.BOTTOM_CENTERED
        elif align_text == 'BOTTOM_RIGHT':
            align = ModifiedBurnins.BOTTOM_RIGHT

        bi_func = preset.get('function')
        if not bi_func:
            log.error(
                'Missing function for burnin! '
                'Burnins are not created!'
            )
            return

        if (
            bi_func in ['frame_numbers', 'timecode'] and
            not start_frame
        ):
            log.error(
                'start_frame is not set in entered data! '
                'Burnins are not created!'
            )
            return

        if bi_func == 'frame_numbers':
            burnin.add_frame_numbers(align, start_frame=start_frame)
        elif bi_func == 'timecode':
            burnin.add_timecode(align, start_frame=start_frame)
        elif bi_func == 'text':
            if not preset.get('text'):
                log.error('Text is not set for text function burnin!')
                return
            text = preset['text'].format(**data)
            burnin.add_text(text, align)
        else:
            log.error(
                'Unknown function for burnins {}'.format(bi_func)
            )
            return

    burnin.render(output_path, overwrite=True)


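# example_with_presets() expects a presets structure along these lines
# (inferred from the lookups above; the values are only illustrative):
#
#   {
#       'tools': {
#           'burnins': {
#               'options': {'opacity': 1, 'font_size': 42},
#               'burnins': {
#                   'TOP_LEFT': {'function': 'timecode'},
#                   'TOP_RIGHT': {'function': 'frame_numbers'},
#                   'BOTTOM_CENTERED': {
#                       'function': 'text', 'text': 'frame {start_frame}'
#                   }
#               }
#           }
#       }
#   }

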
'''
# TODO: implement image sequence
# Changes needed so OpenTimelineIO burnins can be rendered from an image
# sequence.
#
# before input:
# # -start_number is the number of the first frame / -r is fps
# -start_number 375 -r 25
# before output:
# # -c: set output codec (h264, ...)
# -c:v libx264
#
#
# ffmpeg -loglevel panic -i image_sequence -vf "drawtext=text='Test':x=w/2-tw/2:y=0:fontcolor=white@1.0:fontsize=42:fontfile='C\:\\\WINDOWS\\\Fonts\\\arial.ttf':box=1:boxborderw=5:boxcolor=black@0.5,drawtext=text='%{eif\:n+1001\:d}':x=0:y=0:fontcolor=white@1.0:fontsize=42:fontfile='C\:\\\WINDOWS\\\Fonts\\\arial.ttf':box=1:boxborderw=5:boxcolor=black@0.5" C:\Users\jakub.trllo\Desktop\Tests\files\mov\render\test_output.mov
# ffmpeg -loglevel panic -start_number 375 -r 25 -i "C:\Users\jakub.trllo\Desktop\Tests\files\exr\int_c022_lighting_v001_main_AO.%04d.exr" -vf "drawtext=text='Test':x=w/2-tw/2:y=0:fontcolor=white@1.0:fontsize=42:fontfile='C\:\\\WINDOWS\\\Fonts\\\arial.ttf':box=1:boxborderw=5:boxcolor=black@0.5,drawtext=text='%{eif\:n+1001\:d}':x=0:y=0:fontcolor=white@1.0:fontsize=42:fontfile='C\:\\\WINDOWS\\\Fonts\\\arial.ttf':box=1:boxborderw=5:boxcolor=black@0.5,colormatrix=bt601:bt709" -c:v libx264 "output_path.mov"
'''
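

# A minimal sketch of the image-sequence command described in the notes above
# (an untested assumption: the helper name and arguments are illustrative,
# and the drawtext filter is shortened for readability):
def _example_image_sequence_command(input_pattern, output_path,
                                    start_number, fps):
    """Build an ffmpeg command for burnins over an image sequence."""
    return (
        'ffmpeg -loglevel panic -start_number {sn} -r {fps} -i "{inp}" '
        '-vf "drawtext=text=\'%{{eif\\:n+{sn}\\:d}}\':x=0:y=0" '
        '-c:v libx264 -y "{out}"'
    ).format(sn=start_number, fps=fps, inp=input_pattern, out=output_path)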
@@ -16,21 +16,21 @@ import pyblish.api
 
 # Registers Global pyblish plugins
-# pype.install()
+pype.install()
 # Registers Standalone pyblish plugins
 PUBLISH_PATH = os.path.sep.join(
     [pype.PLUGINS_DIR, 'standalonepublish', 'publish']
 )
 pyblish.api.register_plugin_path(PUBLISH_PATH)
 
 # # Registers Standalone pyblish plugins
 # PUBLISH_PATH = os.path.sep.join(
-#     [pype.PLUGINS_DIR, 'ftrack', 'publish']
+#     [pype.PLUGINS_DIR, 'standalonepublish', 'publish']
 # )
 # pyblish.api.register_plugin_path(PUBLISH_PATH)
 
+# Registers Standalone pyblish plugins
+PUBLISH_PATH = os.path.sep.join(
+    [pype.PLUGINS_DIR, 'ftrack', 'publish']
+)
+pyblish.api.register_plugin_path(PUBLISH_PATH)
 
-def set_context(project, asset, app):
+def set_context(project, asset, task, app):
     ''' Sets context for pyblish (must be done before pyblish is launched)
     :param project: Name of `Project` where instance should be published
     :type project: str

@@ -41,6 +41,11 @@ def set_context(project, asset, app):
io.Session["AVALON_PROJECT"] = project
|
||||
os.environ["AVALON_ASSET"] = asset
|
||||
io.Session["AVALON_ASSET"] = asset
|
||||
if not task:
|
||||
task = ''
|
||||
os.environ["AVALON_TASK"] = task
|
||||
io.Session["AVALON_TASK"] = task
|
||||
|
||||
|
||||
io.install()
|
||||
|
@@ -117,7 +117,7 @@ class ComponentsWidget(QtWidgets.QWidget):
         try:
             data = self.parent_widget.collect_data()
             publish.set_context(
-                data['project'], data['asset'], 'standalonepublish'
+                data['project'], data['asset'], data['task'], 'standalonepublish'
             )
             result = publish.publish(data)
             # Clear widgets from components list if publishing was successful
Some files were not shown because too many files have changed in this diff.