diff --git a/.github/weekly-digest.yml b/.github/weekly-digest.yml
deleted file mode 100644
index fe502fbc98..0000000000
--- a/.github/weekly-digest.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-# Configuration for weekly-digest - https://github.com/apps/weekly-digest
-publishDay: sun
-canPublishIssues: true
-canPublishPullRequests: true
-canPublishContributors: true
-canPublishStargazers: true
-canPublishCommits: true
diff --git a/.gitignore b/.gitignore
index ba8805e013..ebb47e55d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -64,7 +64,6 @@ coverage.xml
.hypothesis/
.pytest_cache/
-
# Node JS packages
##################
node_modules
diff --git a/README.md b/README.md
index 73620d7885..566e226538 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,10 @@
OpenPype
====
+[![documentation](https://github.com/pypeclub/pype/actions/workflows/documentation.yml/badge.svg)](https://github.com/pypeclub/pype/actions/workflows/documentation.yml)
+
+
+
Introduction
------------
@@ -61,7 +65,8 @@ git clone --recurse-submodules git@github.com:Pypeclub/OpenPype.git
#### To build OpenPype:
1) Run `.\tools\create_env.ps1` to create virtual environment in `.\venv`
-2) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`
+2) Run `.\tools\fetch_thirdparty_libs.ps1` to download third-party dependencies such as ffmpeg and oiio. These will be included in the build.
+3) Run `.\tools\build.ps1` to build OpenPype executables in `.\build\`
To create distributable OpenPype versions, run `./tools/create_zip.ps1` - that will
create zip file with name `openpype-vx.x.x.zip` parsed from current OpenPype repository and
@@ -116,8 +121,8 @@ pyenv local 3.7.9
#### To build OpenPype:
1) Run `.\tools\create_env.sh` to create virtual environment in `.\venv`
-2) Run `.\tools\build.sh` to build OpenPype executables in `.\build\`
-
+2) Run `./tools/fetch_thirdparty_libs.sh` to download third-party dependencies such as ffmpeg and oiio. These will be included in the build.
+3) Run `./tools/build.sh` to build OpenPype executables in `./build/`
### Linux
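
The README's build steps follow the same three-stage order on every platform: create the environment, fetch third-party dependencies, then build. A hedged sketch of that sequence (the script names come from the README; the `build_scripts`/`run_build` helpers and the `dry_run` flag are purely illustrative):

```python
import subprocess
from pathlib import Path

# The three build stages from the README, in the required order.
STAGES = ("create_env", "fetch_thirdparty_libs", "build")


def build_scripts(platform: str) -> list:
    """Return the ordered tool scripts for 'windows' or 'unix'."""
    ext = ".ps1" if platform == "windows" else ".sh"
    return [str(Path("tools") / (stage + ext)) for stage in STAGES]


def run_build(platform: str, dry_run: bool = True):
    """Run (or just print) each build stage in order."""
    for script in build_scripts(platform):
        if dry_run:
            print(script)
        else:
            subprocess.run([script], check=True)
```

Running `fetch_thirdparty_libs` before `build` matters, because the fetched binaries are bundled into the build output.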
diff --git a/igniter/Poppins/OFL.txt b/igniter/Poppins/OFL.txt
new file mode 100644
index 0000000000..76df3b5656
--- /dev/null
+++ b/igniter/Poppins/OFL.txt
@@ -0,0 +1,93 @@
+Copyright 2020 The Poppins Project Authors (https://github.com/itfoundry/Poppins)
+
+This Font Software is licensed under the SIL Open Font License, Version 1.1.
+This license is copied below, and is also available with a FAQ at:
+http://scripts.sil.org/OFL
+
+
+-----------------------------------------------------------
+SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+-----------------------------------------------------------
+
+PREAMBLE
+The goals of the Open Font License (OFL) are to stimulate worldwide
+development of collaborative font projects, to support the font creation
+efforts of academic and linguistic communities, and to provide a free and
+open framework in which fonts may be shared and improved in partnership
+with others.
+
+The OFL allows the licensed fonts to be used, studied, modified and
+redistributed freely as long as they are not sold by themselves. The
+fonts, including any derivative works, can be bundled, embedded,
+redistributed and/or sold with any software provided that any reserved
+names are not used by derivative works. The fonts and derivatives,
+however, cannot be released under any other type of license. The
+requirement for fonts to remain under this license does not apply
+to any document created using the fonts or their derivatives.
+
+DEFINITIONS
+"Font Software" refers to the set of files released by the Copyright
+Holder(s) under this license and clearly marked as such. This may
+include source files, build scripts and documentation.
+
+"Reserved Font Name" refers to any names specified as such after the
+copyright statement(s).
+
+"Original Version" refers to the collection of Font Software components as
+distributed by the Copyright Holder(s).
+
+"Modified Version" refers to any derivative made by adding to, deleting,
+or substituting -- in part or in whole -- any of the components of the
+Original Version, by changing formats or by porting the Font Software to a
+new environment.
+
+"Author" refers to any designer, engineer, programmer, technical
+writer or other person who contributed to the Font Software.
+
+PERMISSION & CONDITIONS
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of the Font Software, to use, study, copy, merge, embed, modify,
+redistribute, and sell modified and unmodified copies of the Font
+Software, subject to the following conditions:
+
+1) Neither the Font Software nor any of its individual components,
+in Original or Modified Versions, may be sold by itself.
+
+2) Original or Modified Versions of the Font Software may be bundled,
+redistributed and/or sold with any software, provided that each copy
+contains the above copyright notice and this license. These can be
+included either as stand-alone text files, human-readable headers or
+in the appropriate machine-readable metadata fields within text or
+binary files as long as those fields can be easily viewed by the user.
+
+3) No Modified Version of the Font Software may use the Reserved Font
+Name(s) unless explicit written permission is granted by the corresponding
+Copyright Holder. This restriction only applies to the primary font name as
+presented to the users.
+
+4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
+Software shall not be used to promote, endorse or advertise any
+Modified Version, except to acknowledge the contribution(s) of the
+Copyright Holder(s) and the Author(s) or with their explicit written
+permission.
+
+5) The Font Software, modified or unmodified, in part or in whole,
+must be distributed entirely under this license, and must not be
+distributed under any other license. The requirement for fonts to
+remain under this license does not apply to any document created
+using the Font Software.
+
+TERMINATION
+This license becomes null and void if any of the above conditions are
+not met.
+
+DISCLAIMER
+THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
+OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
+COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
+DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
+OTHER DEALINGS IN THE FONT SOFTWARE.
diff --git a/igniter/Poppins/Poppins-Black.ttf b/igniter/Poppins/Poppins-Black.ttf
new file mode 100644
index 0000000000..a9520b78ac
Binary files /dev/null and b/igniter/Poppins/Poppins-Black.ttf differ
diff --git a/igniter/Poppins/Poppins-BlackItalic.ttf b/igniter/Poppins/Poppins-BlackItalic.ttf
new file mode 100644
index 0000000000..ebfdd707e5
Binary files /dev/null and b/igniter/Poppins/Poppins-BlackItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-Bold.ttf b/igniter/Poppins/Poppins-Bold.ttf
new file mode 100644
index 0000000000..b94d47f3af
Binary files /dev/null and b/igniter/Poppins/Poppins-Bold.ttf differ
diff --git a/igniter/Poppins/Poppins-BoldItalic.ttf b/igniter/Poppins/Poppins-BoldItalic.ttf
new file mode 100644
index 0000000000..e2e64456c7
Binary files /dev/null and b/igniter/Poppins/Poppins-BoldItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-ExtraBold.ttf b/igniter/Poppins/Poppins-ExtraBold.ttf
new file mode 100644
index 0000000000..8f008c3684
Binary files /dev/null and b/igniter/Poppins/Poppins-ExtraBold.ttf differ
diff --git a/igniter/Poppins/Poppins-ExtraBoldItalic.ttf b/igniter/Poppins/Poppins-ExtraBoldItalic.ttf
new file mode 100644
index 0000000000..b2a9bf557a
Binary files /dev/null and b/igniter/Poppins/Poppins-ExtraBoldItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-ExtraLight.ttf b/igniter/Poppins/Poppins-ExtraLight.ttf
new file mode 100644
index 0000000000..ee6238251f
Binary files /dev/null and b/igniter/Poppins/Poppins-ExtraLight.ttf differ
diff --git a/igniter/Poppins/Poppins-ExtraLightItalic.ttf b/igniter/Poppins/Poppins-ExtraLightItalic.ttf
new file mode 100644
index 0000000000..e392492abd
Binary files /dev/null and b/igniter/Poppins/Poppins-ExtraLightItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-Italic.ttf b/igniter/Poppins/Poppins-Italic.ttf
new file mode 100644
index 0000000000..46203996d3
Binary files /dev/null and b/igniter/Poppins/Poppins-Italic.ttf differ
diff --git a/igniter/Poppins/Poppins-Light.ttf b/igniter/Poppins/Poppins-Light.ttf
new file mode 100644
index 0000000000..2ab022196b
Binary files /dev/null and b/igniter/Poppins/Poppins-Light.ttf differ
diff --git a/igniter/Poppins/Poppins-LightItalic.ttf b/igniter/Poppins/Poppins-LightItalic.ttf
new file mode 100644
index 0000000000..6f9279daef
Binary files /dev/null and b/igniter/Poppins/Poppins-LightItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-Medium.ttf b/igniter/Poppins/Poppins-Medium.ttf
new file mode 100644
index 0000000000..e90e87ed69
Binary files /dev/null and b/igniter/Poppins/Poppins-Medium.ttf differ
diff --git a/igniter/Poppins/Poppins-MediumItalic.ttf b/igniter/Poppins/Poppins-MediumItalic.ttf
new file mode 100644
index 0000000000..d8a251c7c4
Binary files /dev/null and b/igniter/Poppins/Poppins-MediumItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-Regular.ttf b/igniter/Poppins/Poppins-Regular.ttf
new file mode 100644
index 0000000000..be06e7fdca
Binary files /dev/null and b/igniter/Poppins/Poppins-Regular.ttf differ
diff --git a/igniter/Poppins/Poppins-SemiBold.ttf b/igniter/Poppins/Poppins-SemiBold.ttf
new file mode 100644
index 0000000000..dabf7c242e
Binary files /dev/null and b/igniter/Poppins/Poppins-SemiBold.ttf differ
diff --git a/igniter/Poppins/Poppins-SemiBoldItalic.ttf b/igniter/Poppins/Poppins-SemiBoldItalic.ttf
new file mode 100644
index 0000000000..29d5f7419b
Binary files /dev/null and b/igniter/Poppins/Poppins-SemiBoldItalic.ttf differ
diff --git a/igniter/Poppins/Poppins-Thin.ttf b/igniter/Poppins/Poppins-Thin.ttf
new file mode 100644
index 0000000000..f5c0fdd531
Binary files /dev/null and b/igniter/Poppins/Poppins-Thin.ttf differ
diff --git a/igniter/Poppins/Poppins-ThinItalic.ttf b/igniter/Poppins/Poppins-ThinItalic.ttf
new file mode 100644
index 0000000000..b910089316
Binary files /dev/null and b/igniter/Poppins/Poppins-ThinItalic.ttf differ
diff --git a/igniter/__init__.py b/igniter/__init__.py
index c2442ad57f..20bf9be106 100644
--- a/igniter/__init__.py
+++ b/igniter/__init__.py
@@ -10,29 +10,22 @@ from .bootstrap_repos import BootstrapRepos
from .version import __version__ as version
-RESULT = 0
-
-
-def get_result(res: int):
- """Sets result returned from dialog."""
- global RESULT
- RESULT = res
-
-
def open_dialog():
"""Show Igniter dialog."""
- from Qt import QtWidgets
+ from Qt import QtWidgets, QtCore
from .install_dialog import InstallDialog
+ scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
+ if scale_attr is not None:
+ QtWidgets.QApplication.setAttribute(scale_attr)
+
app = QtWidgets.QApplication(sys.argv)
d = InstallDialog()
- d.finished.connect(get_result)
d.open()
- app.exec()
-
- return RESULT
+ app.exec_()
+ return d.result()
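
The change above removes the module-level `RESULT` global and the `get_result` callback in favor of reading the dialog's own `result()` accessor after the event loop exits. The same pattern can be shown without Qt (the `FakeDialog` class is a purely illustrative stand-in):

```python
class FakeDialog:
    """Stand-in for a Qt dialog: it owns its exit code internally."""

    def __init__(self):
        self._result = 0

    def done(self, code: int):
        # Qt dialogs store the code passed to done()/accept()/reject().
        self._result = code

    def result(self) -> int:
        return self._result


def open_dialog() -> int:
    # No global state and no finished-signal callback: the dialog keeps
    # its result, and the caller reads it back after the "event loop".
    d = FakeDialog()
    d.done(1)  # simulate the user accepting the dialog
    return d.result()
```

Keeping the result on the dialog avoids the race-prone shared global and makes `open_dialog` safe to call more than once per process.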
__all__ = [
diff --git a/igniter/bootstrap_repos.py b/igniter/bootstrap_repos.py
index c03b30d59a..b4989fb9bf 100644
--- a/igniter/bootstrap_repos.py
+++ b/igniter/bootstrap_repos.py
@@ -286,7 +286,7 @@ class BootstrapRepos:
"""Get version of local OpenPype."""
version = {}
- path = Path(os.path.dirname(__file__)).parent / "openpype" / "version.py"
+ path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
with open(path, "r") as fp:
exec(fp.read(), version)
return version["__version__"]
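
`get_local_live_version` above reads `__version__` by `exec`-ing `version.py` into a fresh dict. That technique can be sketched and exercised in isolation (the temp-file setup is illustrative; the real code reads `openpype/version.py` under `OPENPYPE_ROOT`):

```python
import tempfile
from pathlib import Path


def read_version(version_file: Path) -> str:
    """Execute a version.py file and pull __version__ out of its namespace."""
    namespace = {}
    exec(version_file.read_text(), namespace)
    return namespace["__version__"]


with tempfile.TemporaryDirectory() as tmp:
    vf = Path(tmp) / "version.py"
    vf.write_text('__version__ = "3.0.0"\n')
    print(read_version(vf))
```

Using `exec` into an isolated dict avoids importing the package just to learn its version, at the cost of trusting the file's contents.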
diff --git a/igniter/install_dialog.py b/igniter/install_dialog.py
index 27b2d1fe37..e6439b5129 100644
--- a/igniter/install_dialog.py
+++ b/igniter/install_dialog.py
@@ -2,14 +2,15 @@
"""Show dialog for choosing central pype repository."""
import os
import sys
+import re
+import collections
from Qt import QtCore, QtGui, QtWidgets # noqa
from Qt.QtGui import QValidator # noqa
from Qt.QtCore import QTimer # noqa
-from .install_thread import InstallThread, InstallResult
+from .install_thread import InstallThread
from .tools import (
- validate_path_string,
validate_mongo_connection,
get_openpype_path_from_db
)
@@ -17,504 +18,480 @@ from .user_settings import OpenPypeSecureRegistry
from .version import __version__
-class FocusHandlingLineEdit(QtWidgets.QLineEdit):
- """Handling focus in/out on QLineEdit."""
- focusIn = QtCore.Signal()
- focusOut = QtCore.Signal()
+def load_stylesheet():
+ stylesheet_path = os.path.join(
+ os.path.dirname(__file__),
+ "stylesheet.css"
+ )
+ with open(stylesheet_path, "r") as file_stream:
+ stylesheet = file_stream.read()
- def focusOutEvent(self, event): # noqa
- """For emitting signal on focus out."""
- self.focusOut.emit()
- super().focusOutEvent(event)
+ return stylesheet
- def focusInEvent(self, event): # noqa
- """For emitting signal on focus in."""
- self.focusIn.emit()
- super().focusInEvent(event)
+
+class ButtonWithOptions(QtWidgets.QFrame):
+ option_clicked = QtCore.Signal(str)
+
+ def __init__(self, commands, parent=None):
+ super(ButtonWithOptions, self).__init__(parent)
+
+ self.setObjectName("ButtonWithOptions")
+
+ options_btn = QtWidgets.QToolButton(self)
+ options_btn.setArrowType(QtCore.Qt.DownArrow)
+ options_btn.setIconSize(QtCore.QSize(12, 12))
+
+ default = None
+ default_label = None
+ options_menu = QtWidgets.QMenu(self)
+ for option, option_label in commands.items():
+ if default is None:
+ default = option
+ default_label = option_label
+ continue
+ action = QtWidgets.QAction(option_label, options_menu)
+ action.setData(option)
+ options_menu.addAction(action)
+
+ main_btn = QtWidgets.QPushButton(default_label, self)
+ main_btn.setFlat(True)
+
+ main_layout = QtWidgets.QHBoxLayout(self)
+ main_layout.setContentsMargins(0, 0, 0, 0)
+ main_layout.setSpacing(1)
+
+ main_layout.addWidget(main_btn, 1, QtCore.Qt.AlignVCenter)
+ main_layout.addWidget(options_btn, 0, QtCore.Qt.AlignVCenter)
+
+ main_btn.clicked.connect(self._on_main_button)
+ options_btn.clicked.connect(self._on_options_click)
+ options_menu.triggered.connect(self._on_trigger)
+
+ self.main_btn = main_btn
+ self.options_btn = options_btn
+ self.options_menu = options_menu
+
+ options_btn.setEnabled(not options_menu.isEmpty())
+
+ self._default_value = default
+
+ def resizeEvent(self, event):
+ super(ButtonWithOptions, self).resizeEvent(event)
+ self.options_btn.setFixedHeight(self.main_btn.height())
+
+ def _on_options_click(self):
+ pos = self.main_btn.rect().bottomLeft()
+ point = self.main_btn.mapToGlobal(pos)
+ self.options_menu.popup(point)
+
+ def _on_trigger(self, action):
+ self.option_clicked.emit(action.data())
+
+ def _on_main_button(self):
+ self.option_clicked.emit(self._default_value)
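
The constructor above treats the first entry of the ordered `commands` mapping as the main button's default action and routes every remaining entry into the drop-down menu. That split can be expressed as a small pure function (the function name is illustrative):

```python
import collections


def split_commands(commands):
    """First item becomes the default; the rest feed the options menu."""
    items = list(commands.items())
    default, default_label = items[0]
    menu_items = items[1:]
    return default, default_label, menu_items


# Same shape as InstallDialog.commands in the diff.
cmds = collections.OrderedDict([
    ("run", "Start"),
    ("run_from_code", "Run from code"),
])
default, label, menu = split_commands(cmds)
```

Because the mapping is ordered, whichever command is listed first is what a plain click on the main button emits; the arrow button is disabled when the menu would be empty.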
+
+
+class NiceProgressBar(QtWidgets.QProgressBar):
+ def __init__(self, parent=None):
+ super(NiceProgressBar, self).__init__(parent)
+ self._real_value = 0
+
+ def setValue(self, value):
+ self._real_value = value
+ if value != 0 and value < 11:
+ value = 11
+
+ super(NiceProgressBar, self).setValue(value)
+
+ def value(self):
+ return self._real_value
+
+ def text(self):
+ return "{} %".format(self._real_value)
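
`NiceProgressBar` keeps the real progress value but clamps the *displayed* value so the chunk never looks empty at small non-zero progress. The clamping rule in isolation (extracted from `setValue` above):

```python
def displayed_value(real_value: int) -> int:
    """Map a real 0-100 progress value to what the bar actually shows.

    Non-zero values below 11 are bumped to 11 so the progress chunk
    stays visibly non-empty; 0 and values >= 11 pass through unchanged.
    """
    if real_value != 0 and real_value < 11:
        return 11
    return real_value
```

The widget still reports the true value from `value()` and in its `text()`, so only the painted bar is affected by the clamp.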
+
+
+class ConsoleWidget(QtWidgets.QWidget):
+ def __init__(self, parent=None):
+ super(ConsoleWidget, self).__init__(parent)
+
+ # style for normal and error console text
+ default_console_style = QtGui.QTextCharFormat()
+ error_console_style = QtGui.QTextCharFormat()
+ default_console_style.setForeground(
+ QtGui.QColor.fromRgb(72, 200, 150)
+ )
+ error_console_style.setForeground(
+ QtGui.QColor.fromRgb(184, 54, 19)
+ )
+
+ label = QtWidgets.QLabel("Console:", self)
+
+ console_output = QtWidgets.QPlainTextEdit(self)
+ console_output.setMinimumSize(QtCore.QSize(300, 200))
+ console_output.setReadOnly(True)
+ console_output.setCurrentCharFormat(default_console_style)
+ console_output.setObjectName("Console")
+
+ main_layout = QtWidgets.QVBoxLayout(self)
+ main_layout.setContentsMargins(0, 0, 0, 0)
+ main_layout.addWidget(label, 0)
+ main_layout.addWidget(console_output, 1)
+
+ self.default_console_style = default_console_style
+ self.error_console_style = error_console_style
+
+ self.label = label
+ self.console_output = console_output
+
+ self.hide_console()
+
+ def hide_console(self):
+ self.label.setVisible(False)
+ self.console_output.setVisible(False)
+
+ self.updateGeometry()
+
+ def show_console(self):
+ self.label.setVisible(True)
+ self.console_output.setVisible(True)
+
+ self.updateGeometry()
+
+ def update_console(self, msg: str, error: bool = False) -> None:
+ if not error:
+ self.console_output.setCurrentCharFormat(
+ self.default_console_style
+ )
+ else:
+ self.console_output.setCurrentCharFormat(
+ self.error_console_style
+ )
+ self.console_output.appendPlainText(msg)
+
+
+class MongoUrlInput(QtWidgets.QLineEdit):
+ """Widget to input mongodb URL."""
+
+ def set_valid(self):
+ """Set valid state on mongo url input."""
+ self.setProperty("state", "valid")
+ self.style().polish(self)
+
+ def remove_state(self):
+        """Clear validation state on mongo url input."""
+ self.setProperty("state", "")
+ self.style().polish(self)
+
+ def set_invalid(self):
+ """Set invalid state on mongo url input."""
+ self.setProperty("state", "invalid")
+ self.style().polish(self)
class InstallDialog(QtWidgets.QDialog):
"""Main Igniter dialog window."""
- _size_w = 400
- _size_h = 600
- path = ""
- _controls_disabled = False
+
+ mongo_url_regex = re.compile(r"^(mongodb|mongodb\+srv)://.*?")
+
+ _width = 500
+ _height = 200
+ commands = collections.OrderedDict([
+ ("run", "Start"),
+ ("run_from_code", "Run from code")
+ ])
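
The `mongo_url_regex` above accepts both the `mongodb://` and `mongodb+srv://` schemes (the trailing `.*?` is effectively a no-op, since the pattern is unanchored at the end). A quick check of the pattern as a syntactic pre-filter (the helper function name is illustrative):

```python
import re

# Same pattern as InstallDialog.mongo_url_regex in the diff.
mongo_url_regex = re.compile(r"^(mongodb|mongodb\+srv)://.*?")


def looks_like_mongo_url(url: str) -> bool:
    """Cheap scheme check before attempting a real connection."""
    return mongo_url_regex.match(url) is not None
```

A passing match only means the scheme looks right; `validate_mongo_connection` still has to attempt an actual connection to confirm the URL works.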
def __init__(self, parent=None):
super(InstallDialog, self).__init__(parent)
- self.secure_registry = OpenPypeSecureRegistry("mongodb")
- self.mongo_url = ""
+ self.setWindowTitle(
+ f"OpenPype Igniter {__version__}"
+ )
+ self.setWindowFlags(
+ QtCore.Qt.WindowCloseButtonHint
+ | QtCore.Qt.WindowMinimizeButtonHint
+ )
+
+ current_dir = os.path.dirname(os.path.abspath(__file__))
+ roboto_font_path = os.path.join(current_dir, "RobotoMono-Regular.ttf")
+ poppins_font_path = os.path.join(current_dir, "Poppins")
+ icon_path = os.path.join(current_dir, "openpype_icon.png")
+
+        # Install Roboto and Poppins fonts
+        QtGui.QFontDatabase.addApplicationFont(roboto_font_path)
+        for filename in os.listdir(poppins_font_path):
+            if os.path.splitext(filename)[1] == ".ttf":
+                QtGui.QFontDatabase.addApplicationFont(
+                    os.path.join(poppins_font_path, filename)
+                )
+
+ # Load logo
+ pixmap_openpype_logo = QtGui.QPixmap(icon_path)
+ # Set logo as icon of window
+ self.setWindowIcon(QtGui.QIcon(pixmap_openpype_logo))
+
+ secure_registry = OpenPypeSecureRegistry("mongodb")
+ mongo_url = ""
try:
- self.mongo_url = (
+ mongo_url = (
os.getenv("OPENPYPE_MONGO", "")
- or self.secure_registry.get_item("openPypeMongo")
+ or secure_registry.get_item("openPypeMongo")
)
except ValueError:
pass
- self.setWindowTitle(
- f"OpenPype Igniter {__version__} - OpenPype installation")
- self._icon_path = os.path.join(
- os.path.dirname(__file__), 'openpype_icon.png')
- icon = QtGui.QIcon(self._icon_path)
- self.setWindowIcon(icon)
- self.setWindowFlags(
- QtCore.Qt.WindowCloseButtonHint |
- QtCore.Qt.WindowMinimizeButtonHint
- )
+ self.mongo_url = mongo_url
+ self._pixmap_openpype_logo = pixmap_openpype_logo
- self.setMinimumSize(
- QtCore.QSize(self._size_w, self._size_h))
- self.setMaximumSize(
- QtCore.QSize(self._size_w + 100, self._size_h + 500))
-
- # style for normal console text
- self.default_console_style = QtGui.QTextCharFormat()
- # self.default_console_style.setFontPointSize(0.1)
- self.default_console_style.setForeground(
- QtGui.QColor.fromRgb(72, 200, 150))
-
- # style for error console text
- self.error_console_style = QtGui.QTextCharFormat()
- # self.error_console_style.setFontPointSize(0.1)
- self.error_console_style.setForeground(
- QtGui.QColor.fromRgb(184, 54, 19))
-
- QtGui.QFontDatabase.addApplicationFont(
- os.path.join(
- os.path.dirname(__file__), 'RobotoMono-Regular.ttf')
- )
- self._openpype_run_ready = False
+ self._secure_registry = secure_registry
+ self._controls_disabled = False
+ self._install_thread = None
+ self.resize(QtCore.QSize(self._width, self._height))
self._init_ui()
+ # Set stylesheet
+ self.setStyleSheet(load_stylesheet())
+
+ # Trigger Mongo URL validation
+ self._mongo_input.setText(self.mongo_url)
+
def _init_ui(self):
# basic visual style - dark background, light text
- self.setStyleSheet("""
- color: rgb(200, 200, 200);
- background-color: rgb(23, 23, 23);
- """)
-
- main = QtWidgets.QVBoxLayout(self)
# Main info
# --------------------------------------------------------------------
- self.main_label = QtWidgets.QLabel(
- """Welcome to OpenPype
-
- We've detected OpenPype is not configured yet. But don't worry,
- this is as easy as setting one or two things.
-
- """)
- self.main_label.setWordWrap(True)
- self.main_label.setStyleSheet("color: rgb(200, 200, 200);")
-
- # OpenPype path info
- # --------------------------------------------------------------------
-
- self.openpype_path_label = QtWidgets.QLabel(
- """This is Path to studio location where OpenPype versions
- are stored. It will be pre-filled if your MongoDB connection is
- already set and your studio defined this location.
-
- Leave it empty if you want to install OpenPype version that
- comes with this installation.
-
-
- If you want to just try OpenPype without installing, hit the
- middle button that states "run without installation".
-
- """
- )
-
- self.openpype_path_label.setWordWrap(True)
- self.openpype_path_label.setStyleSheet("color: rgb(150, 150, 150);")
-
- # Path/Url box | Select button
- # --------------------------------------------------------------------
-
- input_layout = QtWidgets.QHBoxLayout()
-
- input_layout.setContentsMargins(0, 10, 0, 10)
- self.user_input = FocusHandlingLineEdit()
-
- self.user_input.setPlaceholderText("Path to OpenPype versions")
- self.user_input.textChanged.connect(self._path_changed)
- self.user_input.setStyleSheet(
- ("color: rgb(233, 233, 233);"
- "background-color: rgb(64, 64, 64);"
- "padding: 0.5em;"
- "border: 1px solid rgb(32, 32, 32);")
- )
-
- self.user_input.setValidator(PathValidator(self.user_input))
-
- self._btn_select = QtWidgets.QPushButton("Select")
- self._btn_select.setToolTip(
- "Select OpenPype repository"
- )
- self._btn_select.setStyleSheet(
- ("color: rgb(64, 64, 64);"
- "background-color: rgb(72, 200, 150);"
- "padding: 0.5em;")
- )
- self._btn_select.setMaximumSize(100, 140)
- self._btn_select.clicked.connect(self._on_select_clicked)
-
- input_layout.addWidget(self.user_input)
- input_layout.addWidget(self._btn_select)
+ main_label = QtWidgets.QLabel("Welcome to OpenPype", self)
+ main_label.setWordWrap(True)
+ main_label.setObjectName("MainLabel")
# Mongo box | OK button
# --------------------------------------------------------------------
-
- self.mongo_label = QtWidgets.QLabel(
- """Enter URL for running MongoDB instance:"""
+ mongo_input = MongoUrlInput(self)
+ mongo_input.setPlaceholderText(
+            "Enter your database address. Example: mongodb://192.168.1.10:27017"
)
- self.mongo_label.setWordWrap(True)
- self.mongo_label.setStyleSheet("color: rgb(150, 150, 150);")
+ mongo_messages_widget = QtWidgets.QWidget(self)
- class MongoWidget(QtWidgets.QWidget):
- """Widget to input mongodb URL."""
-
- def __init__(self, parent=None):
- self._btn_mongo = None
- super(MongoWidget, self).__init__(parent)
- mongo_layout = QtWidgets.QHBoxLayout()
- mongo_layout.setContentsMargins(0, 0, 0, 0)
- self._mongo_input = FocusHandlingLineEdit()
- self._mongo_input.setPlaceholderText("Mongo URL")
- self._mongo_input.textChanged.connect(self._mongo_changed)
- self._mongo_input.focusIn.connect(self._focus_in)
- self._mongo_input.focusOut.connect(self._focus_out)
- self._mongo_input.setValidator(
- MongoValidator(self._mongo_input))
- self._mongo_input.setStyleSheet(
- ("color: rgb(233, 233, 233);"
- "background-color: rgb(64, 64, 64);"
- "padding: 0.5em;"
- "border: 1px solid rgb(32, 32, 32);")
- )
-
- mongo_layout.addWidget(self._mongo_input)
- self.setLayout(mongo_layout)
-
- def _focus_out(self):
- self.validate_url()
-
- def _focus_in(self):
- self._mongo_input.setStyleSheet(
- """
- background-color: rgb(32, 32, 19);
- color: rgb(255, 190, 15);
- padding: 0.5em;
- border: 1px solid rgb(64, 64, 32);
- """
- )
-
- def _mongo_changed(self, mongo: str):
- self.parent().mongo_url = mongo
-
- def get_mongo_url(self) -> str:
- """Helper to get url from parent."""
- return self.parent().mongo_url
-
- def set_mongo_url(self, mongo: str):
- """Helper to set url to parent.
-
- Args:
- mongo (str): mongodb url string.
-
- """
- self._mongo_input.setText(mongo)
-
- def set_valid(self):
- """Set valid state on mongo url input."""
- self._mongo_input.setStyleSheet(
- """
- background-color: rgb(19, 19, 19);
- color: rgb(64, 230, 132);
- padding: 0.5em;
- border: 1px solid rgb(32, 64, 32);
- """
- )
- self.parent().install_button.setEnabled(True)
-
- def set_invalid(self):
- """Set invalid state on mongo url input."""
- self._mongo_input.setStyleSheet(
- """
- background-color: rgb(32, 19, 19);
- color: rgb(255, 69, 0);
- padding: 0.5em;
- border: 1px solid rgb(64, 32, 32);
- """
- )
- self.parent().install_button.setEnabled(False)
-
- def set_read_only(self, state: bool):
- """Set input read-only."""
- self._mongo_input.setReadOnly(state)
-
- def validate_url(self) -> bool:
- """Validate if entered url is ok.
-
- Returns:
- True if url is valid monogo string.
-
- """
- if self.parent().mongo_url == "":
- return False
-
- is_valid, reason_str = validate_mongo_connection(
- self.parent().mongo_url
- )
- if not is_valid:
- self.set_invalid()
- self.parent().update_console(f"!!! {reason_str}", True)
- return False
- else:
- self.set_valid()
- return True
-
- self._mongo = MongoWidget(self)
- if self.mongo_url:
- self._mongo.set_mongo_url(self.mongo_url)
-
- # Bottom button bar
- # --------------------------------------------------------------------
- bottom_widget = QtWidgets.QWidget()
- bottom_layout = QtWidgets.QHBoxLayout()
- openpype_logo_label = QtWidgets.QLabel("openpype logo")
- openpype_logo = QtGui.QPixmap(self._icon_path)
- # openpype_logo.scaled(
- # openpype_logo_label.width(),
- # openpype_logo_label.height(), QtCore.Qt.KeepAspectRatio)
- openpype_logo_label.setPixmap(openpype_logo)
- openpype_logo_label.setContentsMargins(10, 0, 0, 10)
-
- # install button - - - - - - - - - - - - - - - - - - - - - - - - - - -
- self.install_button = QtWidgets.QPushButton("Install")
- self.install_button.setStyleSheet(
- ("color: rgb(64, 64, 64);"
- "background-color: rgb(72, 200, 150);"
- "padding: 0.5em;")
+ mongo_connection_msg = QtWidgets.QLabel(mongo_messages_widget)
+ mongo_connection_msg.setVisible(True)
+ mongo_connection_msg.setTextInteractionFlags(
+ QtCore.Qt.TextSelectableByMouse
)
- self.install_button.setMinimumSize(64, 24)
- self.install_button.setToolTip("Install OpenPype")
- self.install_button.clicked.connect(self._on_ok_clicked)
- # run from current button - - - - - - - - - - - - - - - - - - - - - -
- self.run_button = QtWidgets.QPushButton("Run without installation")
- self.run_button.setStyleSheet(
- ("color: rgb(64, 64, 64);"
- "background-color: rgb(200, 164, 64);"
- "padding: 0.5em;")
- )
- self.run_button.setMinimumSize(64, 24)
- self.run_button.setToolTip("Run without installing Pype")
- self.run_button.clicked.connect(self._on_run_clicked)
-
- # install button - - - - - - - - - - - - - - - - - - - - - - - - - - -
- self._exit_button = QtWidgets.QPushButton("Exit")
- self._exit_button.setStyleSheet(
- ("color: rgb(64, 64, 64);"
- "background-color: rgb(128, 128, 128);"
- "padding: 0.5em;")
- )
- self._exit_button.setMinimumSize(64, 24)
- self._exit_button.setToolTip("Exit")
- self._exit_button.clicked.connect(self._on_exit_clicked)
-
- bottom_layout.setContentsMargins(0, 10, 10, 0)
- bottom_layout.setAlignment(QtCore.Qt.AlignVCenter)
- bottom_layout.addWidget(openpype_logo_label, 0, QtCore.Qt.AlignVCenter)
- bottom_layout.addStretch(1)
- bottom_layout.addWidget(self.install_button, 0, QtCore.Qt.AlignVCenter)
- bottom_layout.addWidget(self.run_button, 0, QtCore.Qt.AlignVCenter)
- bottom_layout.addWidget(self._exit_button, 0, QtCore.Qt.AlignVCenter)
-
- bottom_widget.setLayout(bottom_layout)
- bottom_widget.setStyleSheet("background-color: rgb(32, 32, 32);")
-
- # Console label
- # --------------------------------------------------------------------
- self._status_label = QtWidgets.QLabel("Console:")
- self._status_label.setContentsMargins(0, 10, 0, 10)
- self._status_label.setStyleSheet("color: rgb(61, 115, 97);")
-
- # Console
- # --------------------------------------------------------------------
- self._status_box = QtWidgets.QPlainTextEdit()
- self._status_box.setReadOnly(True)
- self._status_box.setCurrentCharFormat(self.default_console_style)
- self._status_box.setStyleSheet(
- """QPlainTextEdit {
- background-color: rgb(32, 32, 32);
- color: rgb(72, 200, 150);
- font-family: "Roboto Mono";
- font-size: 0.5em;
- border: 1px solid rgb(48, 48, 48);
- }
- QScrollBar:vertical {
- border: 1px solid rgb(61, 115, 97);
- background: #000;
- width:5px;
- margin: 0px 0px 0px 0px;
- }
- QScrollBar::handle:vertical {
- background: rgb(72, 200, 150);
- min-height: 0px;
- }
- QScrollBar::sub-page:vertical {
- background: rgb(31, 62, 50);
- }
- QScrollBar::add-page:vertical {
- background: rgb(31, 62, 50);
- }
- QScrollBar::add-line:vertical {
- background: rgb(72, 200, 150);
- height: 0px;
- subcontrol-position: bottom;
- subcontrol-origin: margin;
- }
- QScrollBar::sub-line:vertical {
- background: rgb(72, 200, 150);
- height: 0 px;
- subcontrol-position: top;
- subcontrol-origin: margin;
- }
- """
- )
+ mongo_messages_layout = QtWidgets.QVBoxLayout(mongo_messages_widget)
+ mongo_messages_layout.setContentsMargins(0, 0, 0, 0)
+ mongo_messages_layout.addWidget(mongo_connection_msg)
# Progress bar
# --------------------------------------------------------------------
- self._progress_bar = QtWidgets.QProgressBar()
- self._progress_bar.setValue(0)
- self._progress_bar.setAlignment(QtCore.Qt.AlignCenter)
- self._progress_bar.setTextVisible(False)
- # setting font and the size
- self._progress_bar.setFont(QtGui.QFont('Arial', 7))
- self._progress_bar.setStyleSheet(
- """QProgressBar:horizontal {
- height: 5px;
- border: 1px solid rgb(31, 62, 50);
- color: rgb(72, 200, 150);
- }
- QProgressBar::chunk:horizontal {
- background-color: rgb(72, 200, 150);
- }
- """
+ progress_bar = NiceProgressBar(self)
+ progress_bar.setAlignment(QtCore.Qt.AlignCenter)
+ progress_bar.setTextVisible(False)
+
+ # Console
+ # --------------------------------------------------------------------
+ console_widget = ConsoleWidget(self)
+
+ # Bottom button bar
+ # --------------------------------------------------------------------
+ bottom_widget = QtWidgets.QWidget(self)
+
+ btns_widget = QtWidgets.QWidget(bottom_widget)
+
+ openpype_logo_label = QtWidgets.QLabel("openpype logo", bottom_widget)
+ openpype_logo_label.setPixmap(self._pixmap_openpype_logo)
+
+ run_button = ButtonWithOptions(
+ self.commands,
+ btns_widget
)
+ run_button.setMinimumSize(64, 24)
+ run_button.setToolTip("Run OpenPype")
+
+    # exit button - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+ exit_button = QtWidgets.QPushButton("Exit", btns_widget)
+ exit_button.setObjectName("ExitBtn")
+ exit_button.setFlat(True)
+ exit_button.setMinimumSize(64, 24)
+ exit_button.setToolTip("Exit")
+
+ btns_layout = QtWidgets.QHBoxLayout(btns_widget)
+ btns_layout.setContentsMargins(0, 0, 0, 0)
+ btns_layout.addWidget(run_button, 0)
+ btns_layout.addWidget(exit_button, 0)
+
+ bottom_layout = QtWidgets.QHBoxLayout(bottom_widget)
+ bottom_layout.setContentsMargins(0, 0, 0, 0)
+ bottom_layout.setAlignment(QtCore.Qt.AlignHCenter)
+ bottom_layout.addWidget(openpype_logo_label, 0)
+ bottom_layout.addStretch(1)
+ bottom_layout.addWidget(btns_widget, 0)
+
# add all to main
- main.addWidget(self.main_label, 0)
- main.addWidget(self.openpype_path_label, 0)
- main.addLayout(input_layout, 0)
- main.addWidget(self.mongo_label, 0)
- main.addWidget(self._mongo, 0)
+ main = QtWidgets.QVBoxLayout(self)
+ main.addSpacing(15)
+ main.addWidget(main_label, 0)
+ main.addSpacing(15)
+ main.addWidget(mongo_input, 0)
+ main.addWidget(mongo_messages_widget, 0)
- main.addWidget(self._status_label, 0)
- main.addWidget(self._status_box, 1)
+ main.addWidget(progress_bar, 0)
+ main.addSpacing(15)
+
+ main.addWidget(console_widget, 1)
- main.addWidget(self._progress_bar, 0)
main.addWidget(bottom_widget, 0)
- self.setLayout(main)
+ run_button.option_clicked.connect(self._on_run_btn_click)
+ exit_button.clicked.connect(self._on_exit_clicked)
+ mongo_input.textChanged.connect(self._on_mongo_url_change)
- # if mongo url is ok, try to get openpype path from there
- if self._mongo.validate_url() and len(self.path) == 0:
- self.path = get_openpype_path_from_db(self.mongo_url)
- self.user_input.setText(self.path)
+ self._console_widget = console_widget
- def _on_select_clicked(self):
- """Show directory dialog."""
- options = QtWidgets.QFileDialog.Options()
- options |= QtWidgets.QFileDialog.DontUseNativeDialog
- options |= QtWidgets.QFileDialog.ShowDirsOnly
+ self.main_label = main_label
- result = QtWidgets.QFileDialog.getExistingDirectory(
- parent=self,
- caption='Select path',
- directory=os.getcwd(),
- options=options)
+ self._mongo_input = mongo_input
- if not result:
+ self._mongo_connection_msg = mongo_connection_msg
+
+ self._run_button = run_button
+ self._exit_button = exit_button
+ self._progress_bar = progress_bar
+
+ def _on_run_btn_click(self, option):
+ # Disable buttons
+ self._disable_buttons()
+ # Set progress to a small non-zero value so the bar shows activity
+ self._update_progress(1)
+ self._progress_bar.repaint()
+ # Add label to show that it is connecting to mongo
+ self.set_invalid_mongo_connection(self.mongo_url, True)
+
+ # Process events to repaint changes
+ QtWidgets.QApplication.processEvents()
+
+ if not self.validate_url():
+ self._enable_buttons()
+ self._update_progress(0)
+ # Update any messages
+ self._mongo_input.setText(self.mongo_url)
return
- filename = QtCore.QDir.toNativeSeparators(result)
-
- if os.path.isdir(filename):
- self.path = filename
- self.user_input.setText(filename)
-
- def _on_run_clicked(self):
- valid, reason = validate_mongo_connection(
- self._mongo.get_mongo_url()
- )
- if not valid:
- self._mongo.set_invalid()
- self.update_console(f"!!! {reason}", True)
- return
+ if option == "run":
+ self._run_openpype()
+ elif option == "run_from_code":
+ self._run_openpype_from_code()
else:
- self._mongo.set_valid()
+ raise AssertionError("BUG: Unknown variant \"{}\"".format(option))
+
+ self._enable_buttons()
+
+ def _run_openpype_from_code(self):
+ self._secure_registry.set_item("openPypeMongo", self.mongo_url)
self.done(2)
- def _on_ok_clicked(self):
+ def _run_openpype(self):
"""Start install process.
This will once again validate entered path and mongo if ok, start
working thread that will do actual job.
"""
- valid, reason = validate_mongo_connection(
- self._mongo.get_mongo_url()
- )
- if not valid:
- self._mongo.set_invalid()
- self.update_console(f"!!! {reason}", True)
- return
- else:
- self._mongo.set_valid()
-
- if self._openpype_run_ready:
- self.done(3)
+ # Check if install thread is not already running
+ if self._install_thread and self._install_thread.isRunning():
return
- if self.path and len(self.path) > 0:
- valid, reason = validate_path_string(self.path)
+ self._mongo_input.set_valid()
- if not valid:
- self.update_console(f"!!! {reason}", True)
- return
+ install_thread = InstallThread(self)
+ install_thread.message.connect(self.update_console)
+ install_thread.progress.connect(self._update_progress)
+ install_thread.finished.connect(self._installation_finished)
+ install_thread.set_mongo(self.mongo_url)
- self._disable_buttons()
- self._install_thread = InstallThread(
- self.install_result_callback_handler, self)
- self._install_thread.message.connect(self.update_console)
- self._install_thread.progress.connect(self._update_progress)
- self._install_thread.finished.connect(self._enable_buttons)
- self._install_thread.set_path(self.path)
- self._install_thread.set_mongo(self._mongo.get_mongo_url())
- self._install_thread.start()
+ self._install_thread = install_thread
- def install_result_callback_handler(self, result: InstallResult):
- """Change button behaviour based on installation outcome."""
- status = result.status
+ install_thread.start()
+
+ def _installation_finished(self):
+ status = self._install_thread.result()
if status >= 0:
- self.install_button.setText("Run installed OpenPype")
- self._openpype_run_ready = True
+ self._update_progress(100)
+ QtWidgets.QApplication.processEvents()
+ self.done(3)
+ else:
+ self._show_console()
def _update_progress(self, progress: int):
self._progress_bar.setValue(progress)
+ text_visible = self._progress_bar.isTextVisible()
+ if progress == 0:
+ if text_visible:
+ self._progress_bar.setTextVisible(False)
+ elif not text_visible:
+ self._progress_bar.setTextVisible(True)
def _on_exit_clicked(self):
self.reject()
- def _path_changed(self, path: str) -> str:
- """Set path."""
- self.path = path
- return path
+ def _on_mongo_url_change(self, new_value):
+ # Strip the value
+ new_value = new_value.strip()
+ # Store new mongo url to variable
+ self.mongo_url = new_value
+
+ msg = None
+ # Change style of input
+ if not new_value:
+ self._mongo_input.remove_state()
+ elif not self.mongo_url_regex.match(new_value):
+ self._mongo_input.set_invalid()
+ msg = (
+ "Mongo URL should start with"
+ " \"mongodb://\" or \"mongodb+srv://\""
+ )
+ else:
+ self._mongo_input.set_valid()
+
+ self.set_invalid_mongo_url(msg)
+
+ def validate_url(self):
+ """Validate if entered url is ok.
+
+ Returns:
+ True if url is a valid mongo connection string.
+
+ """
+ if self.mongo_url == "":
+ return False
+
+ is_valid, reason_str = validate_mongo_connection(self.mongo_url)
+ if not is_valid:
+ self.set_invalid_mongo_connection(self.mongo_url)
+ self._mongo_input.set_invalid()
+ self.update_console(f"!!! {reason_str}", True)
+ return False
+
+ self.set_invalid_mongo_connection(None)
+ self._mongo_input.set_valid()
+ return True
+
+ def set_invalid_mongo_url(self, reason):
+ if reason is None:
+ self._mongo_connection_msg.setText("")
+ else:
+ self._mongo_connection_msg.setText("- {}".format(reason))
+
+ def set_invalid_mongo_connection(self, mongo_url, connecting=False):
+ if mongo_url is None:
+ self.set_invalid_mongo_url(mongo_url)
+ return
+
+ if connecting:
+ msg = "Connecting to: {}".format(mongo_url)
+ else:
+ msg = "Can't connect to: {}".format(mongo_url)
+
+ self.set_invalid_mongo_url(msg)
def update_console(self, msg: str, error: bool = False) -> None:
"""Display message in console.
@@ -523,26 +500,22 @@ class InstallDialog(QtWidgets.QDialog):
msg (str): message.
error (bool): if True, print it red.
"""
- if not error:
- self._status_box.setCurrentCharFormat(self.default_console_style)
- else:
- self._status_box.setCurrentCharFormat(self.error_console_style)
- self._status_box.appendPlainText(msg)
+ self._console_widget.update_console(msg, error)
+
+ def _show_console(self):
+ self._console_widget.show_console()
+ self.updateGeometry()
def _disable_buttons(self):
"""Disable buttons so user interaction doesn't interfere."""
- self._btn_select.setEnabled(False)
- self.run_button.setEnabled(False)
self._exit_button.setEnabled(False)
- self.install_button.setEnabled(False)
+ self._run_button.setEnabled(False)
self._controls_disabled = True
def _enable_buttons(self):
"""Enable buttons after operation is complete."""
- self._btn_select.setEnabled(True)
- self.run_button.setEnabled(True)
self._exit_button.setEnabled(True)
- self.install_button.setEnabled(True)
+ self._run_button.setEnabled(True)
self._controls_disabled = False
def closeEvent(self, event): # noqa
@@ -552,212 +525,6 @@ class InstallDialog(QtWidgets.QDialog):
return super(InstallDialog, self).closeEvent(event)
-class MongoValidator(QValidator):
- """Validate mongodb url for Qt widgets."""
-
- def __init__(self, parent=None, intermediate=False):
- self.parent = parent
- self.intermediate = intermediate
- self._validate_lock = False
- self.timer = QTimer()
- self.timer.timeout.connect(self._unlock_validator)
- super().__init__(parent)
-
- def _unlock_validator(self):
- self._validate_lock = False
-
- def _return_state(
- self, state: QValidator.State, reason: str, mongo: str):
- """Set stylesheets and actions on parent based on state.
-
- Warning:
- This will always return `QValidator.State.Acceptable` as
- anything different will stop input to `QLineEdit`
-
- """
-
- if state == QValidator.State.Invalid:
- self.parent.setToolTip(reason)
- self.parent.setStyleSheet(
- """
- background-color: rgb(32, 19, 19);
- color: rgb(255, 69, 0);
- padding: 0.5em;
- border: 1px solid rgb(64, 32, 32);
- """
- )
- elif state == QValidator.State.Intermediate and self.intermediate:
- self.parent.setToolTip(reason)
- self.parent.setStyleSheet(
- """
- background-color: rgb(32, 32, 19);
- color: rgb(255, 190, 15);
- padding: 0.5em;
- border: 1px solid rgb(64, 64, 32);
- """
- )
- else:
- self.parent.setToolTip(reason)
- self.parent.setStyleSheet(
- """
- background-color: rgb(19, 19, 19);
- color: rgb(64, 230, 132);
- padding: 0.5em;
- border: 1px solid rgb(32, 64, 32);
- """
- )
-
- return QValidator.State.Acceptable, mongo, len(mongo)
-
- def validate(self, mongo: str, pos: int) -> (QValidator.State, str, int): # noqa
- """Validate entered mongodb connection string.
-
- As url (it should start with `mongodb://` or
- `mongodb+srv:// url schema.
-
- Args:
- mongo (str): connection string url.
- pos (int): current position.
-
- Returns:
- (QValidator.State.Acceptable, str, int):
- Indicate input state with color and always return
- Acceptable state as we need to be able to edit input further.
-
- """
- if not mongo.startswith("mongodb"):
- return self._return_state(
- QValidator.State.Invalid, "need mongodb schema", mongo)
-
- return self._return_state(
- QValidator.State.Intermediate, "", mongo)
-
-
-class PathValidator(MongoValidator):
- """Validate mongodb url for Qt widgets."""
-
- def validate(self, path: str, pos: int) -> (QValidator.State, str, int): # noqa
- """Validate path to be accepted by Igniter.
-
- Args:
- path (str): path to OpenPype.
- pos (int): current position.
-
- Returns:
- (QValidator.State.Acceptable, str, int):
- Indicate input state with color and always return
- Acceptable state as we need to be able to edit input further.
-
- """
- # allow empty path as that will use current version coming with
- # OpenPype Igniter
- if len(path) == 0:
- return self._return_state(
- QValidator.State.Acceptable, "Use version with Igniter", path)
-
- if len(path) > 3:
- valid, reason = validate_path_string(path)
- if not valid:
- return self._return_state(
- QValidator.State.Invalid, reason, path)
- else:
- return self._return_state(
- QValidator.State.Acceptable, reason, path)
-
-
-class CollapsibleWidget(QtWidgets.QWidget):
- """Collapsible widget to hide mongo url in necessary."""
-
- def __init__(self, parent=None, title: str = "", animation: int = 300):
- self._mainLayout = QtWidgets.QGridLayout(parent)
- self._toggleButton = QtWidgets.QToolButton(parent)
- self._headerLine = QtWidgets.QFrame(parent)
- self._toggleAnimation = QtCore.QParallelAnimationGroup(parent)
- self._contentArea = QtWidgets.QScrollArea(parent)
- self._animation = animation
- self._title = title
- super(CollapsibleWidget, self).__init__(parent)
- self._init_ui()
-
- def _init_ui(self):
- self._toggleButton.setStyleSheet(
- """QToolButton {
- border: none;
- }
- """)
- self._toggleButton.setToolButtonStyle(
- QtCore.Qt.ToolButtonTextBesideIcon)
-
- self._toggleButton.setArrowType(QtCore.Qt.ArrowType.RightArrow)
- self._toggleButton.setText(self._title)
- self._toggleButton.setCheckable(True)
- self._toggleButton.setChecked(False)
-
- self._headerLine.setFrameShape(QtWidgets.QFrame.HLine)
- self._headerLine.setFrameShadow(QtWidgets.QFrame.Sunken)
- self._headerLine.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
- QtWidgets.QSizePolicy.Maximum)
-
- self._contentArea.setStyleSheet(
- """QScrollArea {
- background-color: rgb(32, 32, 32);
- border: none;
- }
- """)
- self._contentArea.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
- QtWidgets.QSizePolicy.Fixed)
- self._contentArea.setMaximumHeight(0)
- self._contentArea.setMinimumHeight(0)
-
- self._toggleAnimation.addAnimation(
- QtCore.QPropertyAnimation(self, b"minimumHeight"))
- self._toggleAnimation.addAnimation(
- QtCore.QPropertyAnimation(self, b"maximumHeight"))
- self._toggleAnimation.addAnimation(
- QtCore.QPropertyAnimation(self._contentArea, b"maximumHeight"))
-
- self._mainLayout.setVerticalSpacing(0)
- self._mainLayout.setContentsMargins(0, 0, 0, 0)
-
- row = 0
-
- self._mainLayout.addWidget(
- self._toggleButton, row, 0, 1, 1, QtCore.Qt.AlignCenter)
- self._mainLayout.addWidget(
- self._headerLine, row, 2, 1, 1)
- row += row
- self._mainLayout.addWidget(self._contentArea, row, 0, 1, 3)
- self.setLayout(self._mainLayout)
-
- self._toggleButton.toggled.connect(self._toggle_action)
-
- def _toggle_action(self, collapsed: bool):
- arrow = QtCore.Qt.ArrowType.DownArrow if collapsed else QtCore.Qt.ArrowType.RightArrow # noqa: E501
- direction = QtCore.QAbstractAnimation.Forward if collapsed else QtCore.QAbstractAnimation.Backward # noqa: E501
- self._toggleButton.setArrowType(arrow)
- self._toggleAnimation.setDirection(direction)
- self._toggleAnimation.start()
-
- def setContentLayout(self, content_layout: QtWidgets.QLayout): # noqa
- self._contentArea.setLayout(content_layout)
- collapsed_height = \
- self.sizeHint().height() - self._contentArea.maximumHeight()
- content_height = self._contentArea.sizeHint().height()
-
- for i in range(self._toggleAnimation.animationCount() - 1):
- sec_anim = self._toggleAnimation.animationAt(i)
- sec_anim.setDuration(self._animation)
- sec_anim.setStartValue(collapsed_height)
- sec_anim.setEndValue(collapsed_height + content_height)
-
- con_anim = self._toggleAnimation.animationAt(
- self._toggleAnimation.animationCount() - 1)
-
- con_anim.setDuration(self._animation)
- con_anim.setStartValue(0)
- con_anim.setEndValue(collapsed_height + content_height)
-
-
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
d = InstallDialog()
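The dialog above does a cheap scheme check (`mongo_url_regex`) on every keystroke before any network validation. A minimal sketch of such a pre-check, assuming a regex equivalent to the one the dialog uses (the exact pattern is not shown in this diff):

```python
import re

# Hypothetical equivalent of the dialog's `mongo_url_regex`: accept only
# connection strings using the "mongodb://" or "mongodb+srv://" scheme.
MONGO_URL_REGEX = re.compile(r"^mongodb(\+srv)?://")


def has_mongo_scheme(url):
    """Cheap pre-check done before any network round trip."""
    return bool(MONGO_URL_REGEX.match(url.strip()))


print(has_mongo_scheme("mongodb://localhost:27017"))         # True
print(has_mongo_scheme("mongodb+srv://cluster0.example.net"))  # True
print(has_mongo_scheme("http://localhost:27017"))            # False
```

This keeps the UI responsive: the expensive `validate_mongo_connection` call only runs when the user actually clicks Run.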
diff --git a/igniter/install_thread.py b/igniter/install_thread.py
index df8b830209..383012b88b 100644
--- a/igniter/install_thread.py
+++ b/igniter/install_thread.py
@@ -17,12 +17,6 @@ from .bootstrap_repos import (
from .tools import validate_mongo_connection
-class InstallResult(QObject):
- """Used to pass results back."""
- def __init__(self, value):
- self.status = value
-
-
class InstallThread(QThread):
"""Install Worker thread.
@@ -36,15 +30,22 @@ class InstallThread(QThread):
"""
progress = Signal(int)
message = Signal((str, bool))
- finished = Signal(object)
- def __init__(self, callback, parent=None,):
+ def __init__(self, parent=None,):
self._mongo = None
self._path = None
- self.result_callback = callback
+ self._result = None
QThread.__init__(self, parent)
- self.finished.connect(callback)
+
+ def result(self):
+ """Result of finished installation."""
+ return self._result
+
+ def _set_result(self, value):
+ if self._result is not None:
+ raise AssertionError("BUG: Result was set more than once!")
+ self._result = value
def run(self):
"""Thread entry point.
@@ -76,7 +77,7 @@ class InstallThread(QThread):
except ValueError:
self.message.emit(
"!!! We need MongoDB URL to proceed.", True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
else:
self._mongo = os.getenv("OPENPYPE_MONGO")
@@ -101,7 +102,7 @@ class InstallThread(QThread):
self.message.emit("Skipping OpenPype install ...", False)
if detected[-1].path.suffix.lower() == ".zip":
bs.extract_openpype(detected[-1])
- self.finished.emit(InstallResult(0))
+ self._set_result(0)
return
if OpenPypeVersion(version=local_version).get_main_version() == detected[-1].get_main_version(): # noqa
@@ -110,7 +111,7 @@ class InstallThread(QThread):
f"currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
- self.finished.emit(InstallResult(0))
+ self._set_result(0)
return
self.message.emit((
@@ -126,13 +127,13 @@ class InstallThread(QThread):
if not openpype_version:
self.message.emit(
f"!!! Install failed - {openpype_version}", True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
self.message.emit(f"Using: {openpype_version}", False)
bs.install_version(openpype_version)
self.message.emit(f"Installed as {openpype_version}", False)
self.progress.emit(100)
- self.finished.emit(InstallResult(1))
+ self._set_result(1)
return
else:
self.message.emit("None detected.", False)
@@ -144,7 +145,7 @@ class InstallThread(QThread):
if not local_openpype:
self.message.emit(
f"!!! Install failed - {local_openpype}", True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
try:
@@ -154,11 +155,12 @@ class InstallThread(QThread):
OpenPypeVersionIOError) as e:
self.message.emit(f"Installed failed: ", True)
self.message.emit(str(e), True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
self.message.emit(f"Installed as {local_openpype}", False)
self.progress.emit(100)
+ self._set_result(1)
return
else:
# if we have mongo connection string, validate it, set it to
@@ -167,7 +169,7 @@ class InstallThread(QThread):
if not validate_mongo_connection(self._mongo):
self.message.emit(
f"!!! invalid mongo url {self._mongo}", True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
bs.secure_registry.set_item("openPypeMongo", self._mongo)
os.environ["OPENPYPE_MONGO"] = self._mongo
@@ -177,11 +179,11 @@ class InstallThread(QThread):
if not repo_file:
self.message.emit("!!! Cannot install", True)
- self.finished.emit(InstallResult(-1))
+ self._set_result(-1)
return
self.progress.emit(100)
- self.finished.emit(InstallResult(1))
+ self._set_result(1)
return
def set_path(self, path: str) -> None:
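The refactor above replaces the `finished = Signal(object)` payload (the removed `InstallResult` wrapper) with a `_result` attribute that the dialog reads back after the thread finishes. A plain-`threading` sketch of the same pattern (Qt left out so the sketch is self-contained):

```python
import threading


class WorkerThread(threading.Thread):
    """Plain-threading analog of the refactored InstallThread: the outcome
    is stored once on the instance and read back after the thread finishes,
    instead of being carried in a finished-signal payload."""

    def __init__(self):
        super().__init__()
        self._result = None

    def result(self):
        """Result of the finished run."""
        return self._result

    def _set_result(self, value):
        # Mirrors the diff's guard against setting the result twice.
        if self._result is not None:
            raise AssertionError("BUG: Result was set more than once!")
        self._result = value

    def run(self):
        self._set_result(1)  # 1 ~ success, -1 ~ failure in the diff above


worker = WorkerThread()
worker.start()
worker.join()
print(worker.result())  # 1
```

The guard in `_set_result` turns a double-set (which would silently overwrite the status under the old signal scheme) into a loud failure.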
diff --git a/igniter/openpype.icns b/igniter/openpype.icns
new file mode 100644
index 0000000000..792f819ad9
Binary files /dev/null and b/igniter/openpype.icns differ
diff --git a/igniter/stylesheet.css b/igniter/stylesheet.css
new file mode 100644
index 0000000000..8df2621d83
--- /dev/null
+++ b/igniter/stylesheet.css
@@ -0,0 +1,280 @@
+*{
+ font-size: 10pt;
+ font-family: "Poppins";
+}
+
+QWidget {
+ color: #bfccd6;
+ background-color: #282C34;
+ border-radius: 0px;
+}
+
+QMenu {
+ border: 1px solid #555555;
+ background-color: #21252B;
+}
+
+QMenu::item {
+ padding: 5px 10px 5px 10px;
+ border-left: 5px solid #313741;
+}
+
+QMenu::item:selected {
+ border-left-color: rgb(84, 209, 178);
+ background-color: #222d37;
+}
+
+QLineEdit, QPlainTextEdit {
+ border: 1px solid #464b54;
+ border-radius: 3px;
+ background-color: #21252B;
+ padding: 0.5em;
+}
+
+QLineEdit[state="valid"] {
+ background-color: rgb(19, 19, 19);
+ color: rgb(64, 230, 132);
+ border-color: rgb(32, 64, 32);
+}
+
+QLineEdit[state="invalid"] {
+ background-color: rgb(32, 19, 19);
+ color: rgb(255, 69, 0);
+ border-color: rgb(64, 32, 32);
+}
+
+QLabel {
+ background: transparent;
+ color: #969b9e;
+}
+
+QLabel:hover {color: #b8c1c5;}
+
+QPushButton {
+ border: 1px solid #aaaaaa;
+ border-radius: 3px;
+ padding: 5px;
+}
+
+QPushButton:hover {
+ background-color: #333840;
+ border: 1px solid #fff;
+ color: #fff;
+}
+
+QTableView {
+ border: 1px solid #444;
+ gridline-color: #6c6c6c;
+ background-color: #201F1F;
+ alternate-background-color:#21252B;
+}
+
+QTableView::item:pressed, QListView::item:pressed, QTreeView::item:pressed {
+ background: #78879b;
+ color: #FFFFFF;
+}
+
+QTableView::item:selected:active, QTreeView::item:selected:active, QListView::item:selected:active {
+ background: #3d8ec9;
+}
+
+QProgressBar {
+ border: 1px solid grey;
+ border-radius: 10px;
+ color: #222222;
+ font-weight: bold;
+}
+QProgressBar:horizontal {
+ height: 20px;
+}
+
+QProgressBar::chunk {
+ border-radius: 10px;
+ background-color: qlineargradient(
+ x1: 0,
+ y1: 0.5,
+ x2: 1,
+ y2: 0.5,
+ stop: 0 rgb(72, 200, 150),
+ stop: 1 rgb(82, 172, 215)
+ );
+}
+
+
+QScrollBar:horizontal {
+ height: 15px;
+ margin: 3px 15px 3px 15px;
+ border: 1px transparent #21252B;
+ border-radius: 4px;
+ background-color: #21252B;
+}
+
+QScrollBar::handle:horizontal {
+ background-color: #4B5362;
+ min-width: 5px;
+ border-radius: 4px;
+}
+
+QScrollBar::add-line:horizontal {
+ margin: 0px 3px 0px 3px;
+ border-image: url(:/qss_icons/rc/right_arrow_disabled.png);
+ width: 10px;
+ height: 10px;
+ subcontrol-position: right;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:horizontal {
+ margin: 0px 3px 0px 3px;
+ border-image: url(:/qss_icons/rc/left_arrow_disabled.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: left;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::add-line:horizontal:hover,QScrollBar::add-line:horizontal:on {
+ border-image: url(:/qss_icons/rc/right_arrow.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: right;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:horizontal:hover, QScrollBar::sub-line:horizontal:on {
+ border-image: url(:/qss_icons/rc/left_arrow.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: left;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::up-arrow:horizontal, QScrollBar::down-arrow:horizontal {
+ background: none;
+}
+
+QScrollBar::add-page:horizontal, QScrollBar::sub-page:horizontal {
+ background: none;
+}
+
+QScrollBar:vertical {
+ background-color: #21252B;
+ width: 15px;
+ margin: 15px 3px 15px 3px;
+ border: 1px transparent #21252B;
+ border-radius: 4px;
+}
+
+QScrollBar::handle:vertical {
+ background-color: #4B5362;
+ min-height: 5px;
+ border-radius: 4px;
+}
+
+QScrollBar::sub-line:vertical {
+ margin: 3px 0px 3px 0px;
+ border-image: url(:/qss_icons/rc/up_arrow_disabled.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: top;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::add-line:vertical {
+ margin: 3px 0px 3px 0px;
+ border-image: url(:/qss_icons/rc/down_arrow_disabled.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: bottom;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::sub-line:vertical:hover,QScrollBar::sub-line:vertical:on {
+
+ border-image: url(:/qss_icons/rc/up_arrow.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: top;
+ subcontrol-origin: margin;
+}
+
+
+QScrollBar::add-line:vertical:hover, QScrollBar::add-line:vertical:on {
+ border-image: url(:/qss_icons/rc/down_arrow.png);
+ height: 10px;
+ width: 10px;
+ subcontrol-position: bottom;
+ subcontrol-origin: margin;
+}
+
+QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical {
+ background: none;
+}
+
+
+QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical {
+ background: none;
+}
+
+#MainLabel {
+ color: rgb(200, 200, 200);
+ font-size: 12pt;
+}
+
+#Console {
+ background-color: #21252B;
+ color: rgb(72, 200, 150);
+ font-family: "Roboto Mono";
+ font-size: 8pt;
+}
+
+#ExitBtn {
+ /* `border` must be set so the background of a flat button is painted. */
+ border: none;
+ color: rgb(39, 39, 39);
+ background-color: #828a97;
+ padding: 0.5em;
+ font-weight: 400;
+}
+
+#ExitBtn:hover{
+ background-color: #b2bece;
+}
+#ExitBtn:disabled {
+ background-color: rgba(185, 185, 185, 31);
+ color: rgba(64, 64, 64, 63);
+}
+
+#ButtonWithOptions QPushButton{
+ border-top-right-radius: 0px;
+ border-bottom-right-radius: 0px;
+ border: none;
+ background-color: rgb(84, 209, 178);
+ color: rgb(39, 39, 39);
+ font-weight: 400;
+ padding: 0.5em;
+}
+#ButtonWithOptions QPushButton:hover{
+ background-color: rgb(85, 224, 189);
+}
+#ButtonWithOptions QPushButton:disabled {
+ background-color: rgba(72, 200, 150, 31);
+ color: rgba(64, 64, 64, 63);
+}
+
+#ButtonWithOptions QToolButton{
+ border: none;
+ border-top-left-radius: 0px;
+ border-bottom-left-radius: 0px;
+ border-top-right-radius: 3px;
+ border-bottom-right-radius: 3px;
+ background-color: rgb(84, 209, 178);
+ color: rgb(39, 39, 39);
+}
+#ButtonWithOptions QToolButton:hover{
+ background-color: rgb(85, 224, 189);
+}
+#ButtonWithOptions QToolButton:disabled {
+ background-color: rgba(72, 200, 150, 31);
+ color: rgba(64, 64, 64, 63);
+}
diff --git a/igniter/tools.py b/igniter/tools.py
index ff2db6bc7e..529d535c25 100644
--- a/igniter/tools.py
+++ b/igniter/tools.py
@@ -14,7 +14,12 @@ from pathlib import Path
import platform
from pymongo import MongoClient
-from pymongo.errors import ServerSelectionTimeoutError, InvalidURI
+from pymongo.errors import (
+ ServerSelectionTimeoutError,
+ InvalidURI,
+ ConfigurationError,
+ OperationFailure
+)
def decompose_url(url: str) -> Dict:
@@ -115,30 +120,20 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
parsed = urlparse(cnx)
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
return False, "Not mongodb schema"
- # we have mongo connection string. Let's try if we can connect.
- try:
- components = decompose_url(cnx)
- except RuntimeError:
- return False, f"Invalid port specified."
-
- mongo_args = {
- "host": compose_url(**components),
- "serverSelectionTimeoutMS": 2000
- }
- port = components.get("port")
- if port is not None:
- mongo_args["port"] = int(port)
try:
- client = MongoClient(**mongo_args)
+ client = MongoClient(
+ cnx,
+ serverSelectionTimeoutMS=2000
+ )
client.server_info()
client.close()
except ServerSelectionTimeoutError as e:
return False, f"Cannot connect to server {cnx} - {e}"
except ValueError:
return False, f"Invalid port specified {parsed.port}"
- except InvalidURI as e:
- return False, str(e)
+ except (ConfigurationError, OperationFailure, InvalidURI) as exc:
+ return False, str(exc)
else:
return True, "Connection is successful"
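The hunk above simplifies `validate_mongo_connection` to hand the raw connection string straight to `MongoClient` instead of decomposing and recomposing it. The scheme check that still guards the function can be sketched on its own (pymongo intentionally left out of this sketch, since the connection attempt needs a live server):

```python
from urllib.parse import urlparse


def check_mongo_scheme(cnx):
    """First step of the validation above: reject anything that is not a
    mongodb:// or mongodb+srv:// connection string before pymongo is
    asked to connect."""
    parsed = urlparse(cnx)
    if parsed.scheme not in ("mongodb", "mongodb+srv"):
        return False, "Not mongodb schema"
    return True, "Schema is valid"


print(check_mongo_scheme("mongodb://user:pass@localhost:27017"))
print(check_mongo_scheme("ftp://localhost"))
```

Letting `MongoClient` parse the URL itself is also why the except clause now catches `ConfigurationError` and `OperationFailure`: pymongo raises those for malformed options and failed authentication that the old hand-rolled decomposition never surfaced.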
diff --git a/inno_setup.iss b/inno_setup.iss
new file mode 100644
index 0000000000..ead9907955
--- /dev/null
+++ b/inno_setup.iss
@@ -0,0 +1,50 @@
+; Script generated by the Inno Setup Script Wizard.
+; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
+
+
+#define MyAppName "OpenPype"
+#define Build GetEnv("BUILD_DIR")
+#define AppVer GetEnv("BUILD_VERSION")
+
+
+[Setup]
+; NOTE: The value of AppId uniquely identifies this application. Do not use the same AppId value in installers for other applications.
+; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
+AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
+AppName={#MyAppName}
+AppVersion={#AppVer}
+AppVerName={#MyAppName} version {#AppVer}
+AppPublisher=Orbi Tools s.r.o
+AppPublisherURL=http://pype.club
+AppSupportURL=http://pype.club
+AppUpdatesURL=http://pype.club
+DefaultDirName={autopf}\{#MyAppName}
+DisableProgramGroupPage=yes
+OutputBaseFilename={#MyAppName}-{#AppVer}-install
+AllowCancelDuringInstall=yes
+; Uncomment the following line to run in non administrative install mode (install for current user only.)
+;PrivilegesRequired=lowest
+PrivilegesRequiredOverridesAllowed=dialog
+SetupIconFile=igniter\openpype.ico
+OutputDir=build\
+Compression=lzma
+SolidCompression=yes
+WizardStyle=modern
+
+[Languages]
+Name: "english"; MessagesFile: "compiler:Default.isl"
+
+[Tasks]
+Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
+
+[Files]
+Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
+; NOTE: Don't use "Flags: ignoreversion" on any shared system files
+
+[Icons]
+Name: "{autoprograms}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"
+Name: "{autodesktop}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"; Tasks: desktopicon
+
+[Run]
+Filename: "{app}\openpype_gui.exe"; Description: "{cm:LaunchProgram,OpenPype}"; Flags: nowait postinstall skipifsilent
+
diff --git a/openpype/__init__.py b/openpype/__init__.py
index edd48a018d..f63d534e08 100644
--- a/openpype/__init__.py
+++ b/openpype/__init__.py
@@ -9,6 +9,7 @@ from .settings import get_project_settings
from .lib import (
Anatomy,
filter_pyblish_plugins,
+ set_plugin_attributes_from_settings,
change_timer_to_current_context
)
@@ -58,38 +59,8 @@ def patched_discover(superclass):
# run original discover and get plugins
plugins = _original_discover(superclass)
- # determine host application to use for finding presets
- if avalon.registered_host() is None:
- return plugins
- host = avalon.registered_host().__name__.split(".")[-1]
+ set_plugin_attributes_from_settings(plugins, superclass)
- # map plugin superclass to preset json. Currenly suppoted is load and
- # create (avalon.api.Loader and avalon.api.Creator)
- plugin_type = "undefined"
- if superclass.__name__.split(".")[-1] == "Loader":
- plugin_type = "load"
- elif superclass.__name__.split(".")[-1] == "Creator":
- plugin_type = "create"
-
- print(">>> Finding presets for {}:{} ...".format(host, plugin_type))
- try:
- settings = (
- get_project_settings(os.environ['AVALON_PROJECT'])
- [host][plugin_type]
- )
- except KeyError:
- print("*** no presets found.")
- else:
- for plugin in plugins:
- if plugin.__name__ in settings:
- print(">>> We have preset for {}".format(plugin.__name__))
- for option, value in settings[plugin.__name__].items():
- if option == "enabled" and value is False:
- setattr(plugin, "active", False)
- print(" - is disabled by preset")
- else:
- setattr(plugin, option, value)
- print(" - setting `{}`: `{}`".format(option, value))
return plugins
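The inline preset logic removed above is centralized into `set_plugin_attributes_from_settings`. A minimal, hypothetical analog of what that helper does with the settings it finds (the real function also resolves host and plugin type, which this sketch skips):

```python
def apply_plugin_settings(plugins, settings):
    """Copy per-plugin options from a settings dict onto plugin classes,
    mapping `enabled: False` to `active = False` as the removed inline
    code did."""
    for plugin in plugins:
        options = settings.get(plugin.__name__)
        if not options:
            continue
        for option, value in options.items():
            if option == "enabled" and value is False:
                plugin.active = False
            else:
                setattr(plugin, option, value)


class MyLoader:
    active = True


apply_plugin_settings([MyLoader], {"MyLoader": {"enabled": False, "order": 5}})
print(MyLoader.active, MyLoader.order)  # False 5
```

Moving this out of `patched_discover` means any discover path (Loader, Creator, pyblish plugins) can share one attribute-application routine instead of duplicating the preset lookup.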
diff --git a/openpype/hooks/pre_python2_vendor.py b/openpype/hooks/pre_python_2_prelaunch.py
similarity index 87%
rename from openpype/hooks/pre_python2_vendor.py
rename to openpype/hooks/pre_python_2_prelaunch.py
index 35f5ff1a45..84272d2e5d 100644
--- a/openpype/hooks/pre_python2_vendor.py
+++ b/openpype/hooks/pre_python_2_prelaunch.py
@@ -4,11 +4,12 @@ from openpype.lib import PreLaunchHook
class PrePython2Vendor(PreLaunchHook):
"""Prepend python 2 dependencies for py2 hosts."""
- # WARNING This hook will probably be deprecated in OpenPype 3 - kept for test
order = 10
- app_groups = ["hiero", "nuke", "nukex", "unreal"]
def execute(self):
+ if not self.application.use_python_2:
+ return
+
# Prepare vendor dir path
self.log.info("adding global python 2 vendor")
pype_root = os.getenv("OPENPYPE_REPOS_ROOT")
diff --git a/openpype/hosts/aftereffects/api/__init__.py b/openpype/hosts/aftereffects/api/__init__.py
index 9a80801652..e914c26435 100644
--- a/openpype/hosts/aftereffects/api/__init__.py
+++ b/openpype/hosts/aftereffects/api/__init__.py
@@ -5,7 +5,7 @@ import logging
from avalon import io
from avalon import api as avalon
from avalon.vendor import Qt
-from openpype import lib
+from openpype import lib, api
import pyblish.api as pyblish
import openpype.hosts.aftereffects
@@ -81,3 +81,69 @@ def uninstall():
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle layer visibility on instance toggles."""
instance[0].Visible = new_value
+
+
+def get_asset_settings():
+ """Get settings on current asset from database.
+
+ Returns:
+ dict: Scene data.
+
+ """
+ asset_data = lib.get_asset()["data"]
+ fps = asset_data.get("fps")
+ frame_start = asset_data.get("frameStart")
+ frame_end = asset_data.get("frameEnd")
+ handle_start = asset_data.get("handleStart")
+ handle_end = asset_data.get("handleEnd")
+ resolution_width = asset_data.get("resolutionWidth")
+ resolution_height = asset_data.get("resolutionHeight")
+ duration = (frame_end - frame_start + 1) + handle_start + handle_end
+ entity_type = asset_data.get("entityType")
+
+ scene_data = {
+ "fps": fps,
+ "frameStart": frame_start,
+ "frameEnd": frame_end,
+ "handleStart": handle_start,
+ "handleEnd": handle_end,
+ "resolutionWidth": resolution_width,
+ "resolutionHeight": resolution_height,
+ "duration": duration
+ }
+
+ try:
+        # project settings may not contain the validator keys yet
+ skip_resolution_check = (
+ api.get_current_project_settings()
+ ["plugins"]
+ ["aftereffects"]
+ ["publish"]
+ ["ValidateSceneSettings"]
+ ["skip_resolution_check"]
+ )
+ skip_timelines_check = (
+ api.get_current_project_settings()
+ ["plugins"]
+ ["aftereffects"]
+ ["publish"]
+ ["ValidateSceneSettings"]
+ ["skip_timelines_check"]
+ )
+ except KeyError:
+ skip_resolution_check = ['*']
+ skip_timelines_check = ['*']
+
+    if os.getenv('AVALON_TASK') in skip_resolution_check or \
+            '*' in skip_resolution_check:
+ scene_data.pop("resolutionWidth")
+ scene_data.pop("resolutionHeight")
+
+ if entity_type in skip_timelines_check or '*' in skip_timelines_check:
+ scene_data.pop('fps', None)
+ scene_data.pop('frameStart', None)
+ scene_data.pop('frameEnd', None)
+ scene_data.pop('handleStart', None)
+ scene_data.pop('handleEnd', None)
+
+ return scene_data
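The nested `get_current_project_settings()` lookup above is repeated for each key; as a side note, the same pattern can be sketched with a single tolerant helper (plain dicts and hypothetical keys, not OpenPype's API):

```python
from functools import reduce


def deep_get(settings, keys, default=None):
    """Walk nested dicts, returning default when any key is missing."""
    try:
        return reduce(lambda d, k: d[k], keys, settings)
    except (KeyError, TypeError):
        return default


# hypothetical project-settings fragment
settings = {
    "plugins": {"aftereffects": {"publish": {
        "ValidateSceneSettings": {"skip_resolution_check": ["*"]},
    }}},
}
path = ["plugins", "aftereffects", "publish",
        "ValidateSceneSettings", "skip_resolution_check"]

print(deep_get(settings, path, default=["*"]))      # ['*']
print(deep_get(settings, ["plugins", "missing"], default=["*"]))  # ['*']
```

A helper like this would collapse the duplicated try/except around the two settings lookups into one call each.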
diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py
index ba64551283..baac64ed0c 100644
--- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py
+++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py
@@ -12,6 +12,7 @@ class AERenderInstance(RenderInstance):
# extend generic, composition name is needed
comp_name = attr.ib(default=None)
comp_id = attr.ib(default=None)
+ fps = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
@@ -45,6 +46,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
raise ValueError("Couldn't find id, unable to publish. " +
"Please recreate instance.")
item_id = inst["members"][0]
+
work_area_info = self.stub.get_work_area(int(item_id))
if not work_area_info:
@@ -57,6 +59,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
frameEnd = round(work_area_info.workAreaStart +
float(work_area_info.workAreaDuration) *
float(work_area_info.frameRate)) - 1
+ fps = work_area_info.frameRate
+ # TODO add resolution when supported by extension
if inst["family"] == "render" and inst["active"]:
instance = AERenderInstance(
@@ -86,7 +90,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
frameStart=frameStart,
frameEnd=frameEnd,
frameStep=1,
- toBeRenderedOn='deadline'
+ toBeRenderedOn='deadline',
+ fps=fps
)
comp = compositions_by_id.get(int(item_id))
@@ -102,7 +107,6 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
instances.append(instance)
- self.log.debug("instances::{}".format(instances))
return instances
def get_expected_files(self, render_instance):
diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py b/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py
new file mode 100644
index 0000000000..cc7db3141f
--- /dev/null
+++ b/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py
@@ -0,0 +1,110 @@
+# -*- coding: utf-8 -*-
+"""Validate scene settings."""
+import os
+
+import pyblish.api
+
+from avalon import aftereffects
+
+import openpype.hosts.aftereffects.api as api
+
+stub = aftereffects.stub()
+
+
+class ValidateSceneSettings(pyblish.api.InstancePlugin):
+    """
+    Ensure that the Composition Settings (right-click on the comp) match
+    the task attributes in Ftrack.
+
+    By default only the duration is checked - how many frames should be
+    rendered.
+    Compares:
+        Frame end - Frame start + 1 from Ftrack
+    against
+        Duration in the Composition Settings.
+
+    If this validator complains:
+        Check the error message to see where the discrepancy is.
+        Check the 'pype' section of the task attributes in Ftrack for the
+        expected values.
+        Check/modify the rendered composition's settings.
+
+    If you know what you are doing, run publishing again and uncheck this
+    validation before the Validation phase.
+    """
+
+    """
+    Dev docs:
+    Can be configured via 'presets/plugins/aftereffects/publish'
+
+    skip_timelines_check - fill in task names for which to skip
+        validation of:
+            frameStart
+            frameEnd
+            fps
+            handleStart
+            handleEnd
+    skip_resolution_check - fill in entity types ('asset') to skip
+        validation of:
+            resolutionWidth
+            resolutionHeight
+        TODO support in the extension is missing for now
+
+    By default validates duration (how many frames should be published).
+    """
+
+ order = pyblish.api.ValidatorOrder
+ label = "Validate Scene Settings"
+ families = ["render.farm"]
+ hosts = ["aftereffects"]
+ optional = True
+
+ skip_timelines_check = ["*"] # * >> skip for all
+ skip_resolution_check = ["*"]
+
+ def process(self, instance):
+ """Plugin entry point."""
+ expected_settings = api.get_asset_settings()
+ self.log.info("expected_settings::{}".format(expected_settings))
+
+ # handle case where ftrack uses only two decimal places
+ # 23.976023976023978 vs. 23.98
+ fps = instance.data.get("fps")
+        if fps:
+            if isinstance(fps, float):
+                fps = float("{:.2f}".format(fps))
+            expected_settings["fps"] = fps
+
+        duration = (instance.data.get("frameEndHandle")
+                    - instance.data.get("frameStartHandle") + 1)
+
+ current_settings = {
+ "fps": fps,
+ "frameStartHandle": instance.data.get("frameStartHandle"),
+ "frameEndHandle": instance.data.get("frameEndHandle"),
+ "resolutionWidth": instance.data.get("resolutionWidth"),
+ "resolutionHeight": instance.data.get("resolutionHeight"),
+ "duration": duration
+ }
+ self.log.info("current_settings:: {}".format(current_settings))
+
+ invalid_settings = []
+ for key, value in expected_settings.items():
+ if value != current_settings[key]:
+ invalid_settings.append(
+ "{} expected: {} found: {}".format(key, value,
+ current_settings[key])
+ )
+
+        if ((expected_settings.get("handleStart")
+                or expected_settings.get("handleEnd"))
+                and invalid_settings):
+            invalid_settings.append(
+                "Handles are included in the calculation. Remove handles"
+                " in the DB or extend the frame range in Composition"
+                " Settings."
+            )
+
+ msg = "Found invalid settings:\n{}".format(
+ "\n".join(invalid_settings)
+ )
+ assert not invalid_settings, msg
+ assert os.path.exists(instance.data.get("source")), (
+ "Scene file not found (saved under wrong name)"
+ )
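The expected-vs-current comparison in `process` reduces to a dict diff; a self-contained sketch of the same check with sample values (no pyblish, hypothetical numbers):

```python
def find_invalid(expected, current):
    """Return readable mismatches between expected and current settings."""
    return [
        "{} expected: {} found: {}".format(key, value, current.get(key))
        for key, value in expected.items()
        if value != current.get(key)
    ]


expected = {"fps": 25.0, "duration": 101}
current = {"fps": 25.0, "duration": 100}
print(find_invalid(expected, current))
# ['duration expected: 101 found: 100']
```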
diff --git a/openpype/hosts/blender/api/plugin.py b/openpype/hosts/blender/api/plugin.py
index eb88e7af63..de30da3319 100644
--- a/openpype/hosts/blender/api/plugin.py
+++ b/openpype/hosts/blender/api/plugin.py
@@ -9,7 +9,7 @@ from avalon import api
import avalon.blender
from openpype.api import PypeCreatorMixin
-VALID_EXTENSIONS = [".blend", ".json"]
+VALID_EXTENSIONS = [".blend", ".json", ".abc"]
def asset_name(
diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py
index 088a27566d..6d253300d9 100644
--- a/openpype/hosts/blender/hooks/pre_pyside_install.py
+++ b/openpype/hosts/blender/hooks/pre_pyside_install.py
@@ -1,4 +1,5 @@
import os
+import re
import subprocess
from openpype.lib import PreLaunchHook
@@ -31,10 +32,46 @@ class InstallPySideToBlender(PreLaunchHook):
def inner_execute(self):
# Get blender's python directory
+ version_regex = re.compile(r"^2\.[0-9]{2}$")
+
executable = self.launch_context.executable.executable_path
- # Blender installation contain subfolder named with it's version where
- # python binaries are stored.
- version_subfolder = self.launch_context.app_name.split("_")[1]
+ if os.path.basename(executable).lower() != "blender.exe":
+        self.log.info((
+            "Executable is not blender.exe. Can't determine"
+            " Blender's Python to check/install PySide2."
+        ))
+ return
+
+ executable_dir = os.path.dirname(executable)
+ version_subfolders = []
+ for name in os.listdir(executable_dir):
+            fullpath = os.path.join(executable_dir, name)
+ if not os.path.isdir(fullpath):
+ continue
+
+ if not version_regex.match(name):
+ continue
+
+ version_subfolders.append(name)
+
+ if not version_subfolders:
+ self.log.info(
+ "Didn't find version subfolder next to Blender executable"
+ )
+ return
+
+ if len(version_subfolders) > 1:
+ self.log.info((
+ "Found more than one version subfolder next"
+ " to blender executable. {}"
+ ).format(", ".join([
+ '"./{}"'.format(name)
+ for name in version_subfolders
+ ])))
+ return
+
+ version_subfolder = version_subfolders[0]
+
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
@@ -65,6 +102,7 @@ class InstallPySideToBlender(PreLaunchHook):
# Check if PySide2 is installed and skip if yes
if self.is_pyside_installed(python_executable):
+        self.log.debug("PySide2 is already installed for Blender.")
return
# Install PySide2 in blender's python
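The version-subfolder scan in `inner_execute` hinges on the `^2\.[0-9]{2}$` pattern; a quick standalone check of what it accepts (sample folder names, Blender 2.x releases only):

```python
import re

# same pattern the hook compiles; note it only matches 2.xx releases
version_regex = re.compile(r"^2\.[0-9]{2}$")

names = ["2.83", "2.93", "3.0", "2.8", "config", "2.83a"]
matches = [name for name in names if version_regex.match(name)]
print(matches)  # ['2.83', '2.93']
```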
diff --git a/openpype/hosts/blender/plugins/create/create_pointcache.py b/openpype/hosts/blender/plugins/create/create_pointcache.py
new file mode 100644
index 0000000000..03a468f82e
--- /dev/null
+++ b/openpype/hosts/blender/plugins/create/create_pointcache.py
@@ -0,0 +1,35 @@
+"""Create a pointcache asset."""
+
+import bpy
+
+from avalon import api
+from avalon.blender import lib
+import openpype.hosts.blender.api.plugin
+
+
+class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
+ """Polygonal static geometry"""
+
+ name = "pointcacheMain"
+ label = "Point Cache"
+ family = "pointcache"
+ icon = "gears"
+
+ def process(self):
+
+ asset = self.data["asset"]
+ subset = self.data["subset"]
+ name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
+ collection = bpy.data.collections.new(name=name)
+ bpy.context.scene.collection.children.link(collection)
+ self.data['task'] = api.Session.get('AVALON_TASK')
+ lib.imprint(collection, self.data)
+
+ if (self.options or {}).get("useSelection"):
+ objects = lib.get_selection()
+ for obj in objects:
+ collection.objects.link(obj)
+ if obj.type == 'EMPTY':
+ objects.extend(obj.children)
+
+ return collection
diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py
new file mode 100644
index 0000000000..4248cffd69
--- /dev/null
+++ b/openpype/hosts/blender/plugins/load/load_abc.py
@@ -0,0 +1,246 @@
+"""Load an asset in Blender from an Alembic file."""
+
+from pathlib import Path
+from pprint import pformat
+from typing import Dict, List, Optional
+
+from avalon import api, blender
+import bpy
+import openpype.hosts.blender.api.plugin as plugin
+
+
+class CacheModelLoader(plugin.AssetLoader):
+ """Load cache models.
+
+ Stores the imported asset in a collection named after the asset.
+
+ Note:
+ At least for now it only supports Alembic files.
+ """
+
+ families = ["model", "pointcache"]
+ representations = ["abc"]
+
+ label = "Link Alembic"
+ icon = "code-fork"
+ color = "orange"
+
+ def _remove(self, objects, container):
+ for obj in list(objects):
+ if obj.type == 'MESH':
+ bpy.data.meshes.remove(obj.data)
+ elif obj.type == 'EMPTY':
+ bpy.data.objects.remove(obj)
+
+ bpy.data.collections.remove(container)
+
+ def _process(self, libpath, container_name, parent_collection):
+ bpy.ops.object.select_all(action='DESELECT')
+
+ view_layer = bpy.context.view_layer
+ view_layer_collection = view_layer.active_layer_collection.collection
+
+ relative = bpy.context.preferences.filepaths.use_relative_paths
+ bpy.ops.wm.alembic_import(
+ filepath=libpath,
+ relative_path=relative
+ )
+
+ parent = parent_collection
+
+ if parent is None:
+ parent = bpy.context.scene.collection
+
+ model_container = bpy.data.collections.new(container_name)
+ parent.children.link(model_container)
+ for obj in bpy.context.selected_objects:
+ model_container.objects.link(obj)
+ view_layer_collection.objects.unlink(obj)
+
+ name = obj.name
+ obj.name = f"{name}:{container_name}"
+
+ # Groups are imported as Empty objects in Blender
+ if obj.type == 'MESH':
+ data_name = obj.data.name
+ obj.data.name = f"{data_name}:{container_name}"
+
+ if not obj.get(blender.pipeline.AVALON_PROPERTY):
+ obj[blender.pipeline.AVALON_PROPERTY] = dict()
+
+ avalon_info = obj[blender.pipeline.AVALON_PROPERTY]
+ avalon_info.update({"container_name": container_name})
+
+ bpy.ops.object.select_all(action='DESELECT')
+
+ return model_container
+
+ def process_asset(
+ self, context: dict, name: str, namespace: Optional[str] = None,
+ options: Optional[Dict] = None
+ ) -> Optional[List]:
+ """
+ Arguments:
+ name: Use pre-defined name
+ namespace: Use pre-defined namespace
+ context: Full parenthood of representation to load
+ options: Additional settings dictionary
+ """
+
+ libpath = self.fname
+ asset = context["asset"]["name"]
+ subset = context["subset"]["name"]
+
+ lib_container = plugin.asset_name(
+ asset, subset
+ )
+ unique_number = plugin.get_unique_number(
+ asset, subset
+ )
+ namespace = namespace or f"{asset}_{unique_number}"
+ container_name = plugin.asset_name(
+ asset, subset, unique_number
+ )
+
+ container = bpy.data.collections.new(lib_container)
+ container.name = container_name
+ blender.pipeline.containerise_existing(
+ container,
+ name,
+ namespace,
+ context,
+ self.__class__.__name__,
+ )
+
+ container_metadata = container.get(
+ blender.pipeline.AVALON_PROPERTY)
+
+ container_metadata["libpath"] = libpath
+ container_metadata["lib_container"] = lib_container
+
+ obj_container = self._process(
+ libpath, container_name, None)
+
+ container_metadata["obj_container"] = obj_container
+
+ # Save the list of objects in the metadata container
+ container_metadata["objects"] = obj_container.all_objects
+
+ nodes = list(container.objects)
+ nodes.append(container)
+ self[:] = nodes
+ return nodes
+
+ def update(self, container: Dict, representation: Dict):
+ """Update the loaded asset.
+
+ This will remove all objects of the current collection, load the new
+ ones and add them to the collection.
+ If the objects of the collection are used in another collection they
+ will not be removed, only unlinked. Normally this should not be the
+ case though.
+
+ Warning:
+ No nested collections are supported at the moment!
+ """
+ collection = bpy.data.collections.get(
+ container["objectName"]
+ )
+ libpath = Path(api.get_representation_path(representation))
+ extension = libpath.suffix.lower()
+
+ self.log.info(
+ "Container: %s\nRepresentation: %s",
+ pformat(container, indent=2),
+ pformat(representation, indent=2),
+ )
+
+ assert collection, (
+ f"The asset is not loaded: {container['objectName']}"
+ )
+ assert not (collection.children), (
+ "Nested collections are not supported."
+ )
+        assert libpath, (
+            f"No existing library file found for {container['objectName']}"
+        )
+ assert libpath.is_file(), (
+ f"The file doesn't exist: {libpath}"
+ )
+ assert extension in plugin.VALID_EXTENSIONS, (
+ f"Unsupported file: {libpath}"
+ )
+
+ collection_metadata = collection.get(
+ blender.pipeline.AVALON_PROPERTY)
+ collection_libpath = collection_metadata["libpath"]
+
+ obj_container = plugin.get_local_collection_with_name(
+ collection_metadata["obj_container"].name
+ )
+ objects = obj_container.all_objects
+
+ container_name = obj_container.name
+
+ normalized_collection_libpath = (
+ str(Path(bpy.path.abspath(collection_libpath)).resolve())
+ )
+ normalized_libpath = (
+ str(Path(bpy.path.abspath(str(libpath))).resolve())
+ )
+ self.log.debug(
+ "normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
+ normalized_collection_libpath,
+ normalized_libpath,
+ )
+ if normalized_collection_libpath == normalized_libpath:
+ self.log.info("Library already loaded, not updating...")
+ return
+
+ parent = plugin.get_parent_collection(obj_container)
+
+ self._remove(objects, obj_container)
+
+ obj_container = self._process(
+ str(libpath), container_name, parent)
+
+ collection_metadata["obj_container"] = obj_container
+ collection_metadata["objects"] = obj_container.all_objects
+ collection_metadata["libpath"] = str(libpath)
+ collection_metadata["representation"] = str(representation["_id"])
+
+ def remove(self, container: Dict) -> bool:
+ """Remove an existing container from a Blender scene.
+
+ Arguments:
+ container (openpype:container-1.0): Container to remove,
+ from `host.ls()`.
+
+ Returns:
+ bool: Whether the container was deleted.
+
+ Warning:
+ No nested collections are supported at the moment!
+ """
+ collection = bpy.data.collections.get(
+ container["objectName"]
+ )
+ if not collection:
+ return False
+ assert not (collection.children), (
+ "Nested collections are not supported."
+ )
+
+ collection_metadata = collection.get(
+ blender.pipeline.AVALON_PROPERTY)
+
+ obj_container = plugin.get_local_collection_with_name(
+ collection_metadata["obj_container"].name
+ )
+ objects = obj_container.all_objects
+
+ self._remove(objects, obj_container)
+
+ bpy.data.collections.remove(collection)
+
+ return True
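The guard in `update` compares normalized absolute paths before reloading; the same idea outside Blender (pathlib only, no `bpy.path.abspath`, hypothetical file names) looks like:

```python
from pathlib import Path


def same_library(current_libpath: str, new_libpath: str) -> bool:
    """True when both spellings resolve to the same absolute path."""
    return Path(current_libpath).resolve() == Path(new_libpath).resolve()


# a relative and a redundant spelling of the same (hypothetical) file
print(same_library("assets/../assets/model.abc", "assets/model.abc"))  # True
print(same_library("assets/model.abc", "other/model.abc"))             # False
```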
diff --git a/openpype/hosts/blender/plugins/load/load_model.py b/openpype/hosts/blender/plugins/load/load_model.py
index 7297e459a6..d645bedfcc 100644
--- a/openpype/hosts/blender/plugins/load/load_model.py
+++ b/openpype/hosts/blender/plugins/load/load_model.py
@@ -242,65 +242,3 @@ class BlendModelLoader(plugin.AssetLoader):
bpy.data.collections.remove(collection)
return True
-
-
-class CacheModelLoader(plugin.AssetLoader):
- """Load cache models.
-
- Stores the imported asset in a collection named after the asset.
-
- Note:
- At least for now it only supports Alembic files.
- """
-
- families = ["model"]
- representations = ["abc"]
-
- label = "Link Model"
- icon = "code-fork"
- color = "orange"
-
- def process_asset(
- self, context: dict, name: str, namespace: Optional[str] = None,
- options: Optional[Dict] = None
- ) -> Optional[List]:
- """
- Arguments:
- name: Use pre-defined name
- namespace: Use pre-defined namespace
- context: Full parenthood of representation to load
- options: Additional settings dictionary
- """
- raise NotImplementedError(
- "Loading of Alembic files is not yet implemented.")
- # TODO (jasper): implement Alembic import.
-
- libpath = self.fname
- asset = context["asset"]["name"]
- subset = context["subset"]["name"]
- # TODO (jasper): evaluate use of namespace which is 'alien' to Blender.
- lib_container = container_name = (
- plugin.asset_name(asset, subset, namespace)
- )
- relative = bpy.context.preferences.filepaths.use_relative_paths
-
- with bpy.data.libraries.load(
- libpath, link=True, relative=relative
- ) as (data_from, data_to):
- data_to.collections = [lib_container]
-
- scene = bpy.context.scene
- instance_empty = bpy.data.objects.new(
- container_name, None
- )
- scene.collection.objects.link(instance_empty)
- instance_empty.instance_type = 'COLLECTION'
- collection = bpy.data.collections[lib_container]
- collection.name = container_name
- instance_empty.instance_collection = collection
-
- nodes = list(collection.objects)
- nodes.append(collection)
- nodes.append(instance_empty)
- self[:] = nodes
- return nodes
diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py
index 6a89c6019b..a6315908fc 100644
--- a/openpype/hosts/blender/plugins/publish/extract_abc.py
+++ b/openpype/hosts/blender/plugins/publish/extract_abc.py
@@ -11,14 +11,14 @@ class ExtractABC(openpype.api.Extractor):
label = "Extract ABC"
hosts = ["blender"]
- families = ["model"]
+ families = ["model", "pointcache"]
optional = True
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
- filename = f"{instance.name}.fbx"
+ filename = f"{instance.name}.abc"
filepath = os.path.join(stagingdir, filename)
context = bpy.context
@@ -52,6 +52,8 @@ class ExtractABC(openpype.api.Extractor):
old_scale = scene.unit_settings.scale_length
+ bpy.ops.object.select_all(action='DESELECT')
+
selected = list()
for obj in instance:
@@ -67,14 +69,11 @@ class ExtractABC(openpype.api.Extractor):
# We set the scale of the scene for the export
scene.unit_settings.scale_length = 0.01
- self.log.info(new_context)
-
# We export the abc
bpy.ops.wm.alembic_export(
new_context,
filepath=filepath,
- start=1,
- end=1
+ selected=True
)
view_layer.active_layer_collection = old_active_layer_collection
diff --git a/openpype/hosts/hiero/api/__init__.py b/openpype/hosts/hiero/api/__init__.py
index fcb1d50ea8..8d0105ae5f 100644
--- a/openpype/hosts/hiero/api/__init__.py
+++ b/openpype/hosts/hiero/api/__init__.py
@@ -22,6 +22,7 @@ from .pipeline import (
)
from .lib import (
+ pype_tag_name,
get_track_items,
get_current_project,
get_current_sequence,
@@ -73,6 +74,7 @@ __all__ = [
"work_root",
# Lib functions
+ "pype_tag_name",
"get_track_items",
"get_current_project",
"get_current_sequence",
diff --git a/openpype/hosts/hiero/api/events.py b/openpype/hosts/hiero/api/events.py
index c02e3e2ac4..3df095f9e4 100644
--- a/openpype/hosts/hiero/api/events.py
+++ b/openpype/hosts/hiero/api/events.py
@@ -2,7 +2,12 @@ import os
import hiero.core.events
import avalon.api as avalon
from openpype.api import Logger
-from .lib import sync_avalon_data_to_workfile, launch_workfiles_app
+from .lib import (
+ sync_avalon_data_to_workfile,
+ launch_workfiles_app,
+ selection_changed_timeline,
+ before_project_save
+)
from .tags import add_tags_to_workfile
from .menu import update_menu_task_label
@@ -78,7 +83,7 @@ def register_hiero_events():
"Registering events for: kBeforeNewProjectCreated, "
"kAfterNewProjectCreated, kBeforeProjectLoad, kAfterProjectLoad, "
"kBeforeProjectSave, kAfterProjectSave, kBeforeProjectClose, "
- "kAfterProjectClose, kShutdown, kStartup"
+ "kAfterProjectClose, kShutdown, kStartup, kSelectionChanged"
)
# hiero.core.events.registerInterest(
@@ -91,8 +96,8 @@ def register_hiero_events():
hiero.core.events.registerInterest(
"kAfterProjectLoad", afterProjectLoad)
- # hiero.core.events.registerInterest(
- # "kBeforeProjectSave", beforeProjectSaved)
+ hiero.core.events.registerInterest(
+ "kBeforeProjectSave", before_project_save)
# hiero.core.events.registerInterest(
# "kAfterProjectSave", afterProjectSaved)
#
@@ -104,10 +109,16 @@ def register_hiero_events():
# hiero.core.events.registerInterest("kShutdown", shutDown)
# hiero.core.events.registerInterest("kStartup", startupCompleted)
- # workfiles
- hiero.core.events.registerEventType("kStartWorkfiles")
- hiero.core.events.registerInterest("kStartWorkfiles", launch_workfiles_app)
+ hiero.core.events.registerInterest(
+ ("kSelectionChanged", "kTimeline"), selection_changed_timeline)
+ # workfiles
+ try:
+ hiero.core.events.registerEventType("kStartWorkfiles")
+ hiero.core.events.registerInterest(
+ "kStartWorkfiles", launch_workfiles_app)
+ except RuntimeError:
+ pass
def register_events():
"""
diff --git a/openpype/hosts/hiero/api/lib.py b/openpype/hosts/hiero/api/lib.py
index b74e70cae3..a9982d96c4 100644
--- a/openpype/hosts/hiero/api/lib.py
+++ b/openpype/hosts/hiero/api/lib.py
@@ -9,7 +9,7 @@ import hiero
import avalon.api as avalon
import avalon.io
from avalon.vendor.Qt import QtWidgets
-from openpype.api import (Logger, Anatomy, config)
+from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from . import tags
import shutil
from compiler.ast import flatten
@@ -30,9 +30,9 @@ self = sys.modules[__name__]
self._has_been_setup = False
self._has_menu = False
self._registered_gui = None
-self.pype_tag_name = "Pype Data"
-self.default_sequence_name = "PypeSequence"
-self.default_bin_name = "PypeBin"
+self.pype_tag_name = "openpypeData"
+self.default_sequence_name = "openpypeSequence"
+self.default_bin_name = "openpypeBin"
AVALON_CONFIG = os.getenv("AVALON_CONFIG", "pype")
@@ -150,15 +150,27 @@ def get_track_items(
# get selected track items or all in active sequence
if selected:
- selected_items = list(hiero.selection)
- for item in selected_items:
- if track_name and track_name in item.parent().name():
- # filter only items fitting input track name
- track_items.append(item)
- elif not track_name:
- # or add all if no track_name was defined
- track_items.append(item)
- else:
+ try:
+ selected_items = list(hiero.selection)
+ for item in selected_items:
+ if track_name and track_name in item.parent().name():
+ # filter only items fitting input track name
+ track_items.append(item)
+ elif not track_name:
+ # or add all if no track_name was defined
+ track_items.append(item)
+ except AttributeError:
+ pass
+
+ # check if any collected track items are
+ # `core.Hiero.Python.TrackItem` instance
+ if track_items:
+ any_track_item = track_items[0]
+ if not isinstance(any_track_item, hiero.core.TrackItem):
+ selected_items = []
+
+ # collect all available active sequence track items
+ if not track_items:
sequence = get_current_sequence(name=sequence_name)
# get all available tracks from sequence
tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@@ -240,7 +252,7 @@ def set_track_item_pype_tag(track_item, data=None):
# basic Tag's attribute
tag_data = {
"editable": "0",
- "note": "Pype data holder",
+ "note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": {k: v for k, v in data.items()}
}
@@ -744,10 +756,13 @@ def _set_hrox_project_knobs(doc, **knobs):
# set attributes to Project Tag
proj_elem = doc.documentElement().firstChildElement("Project")
for k, v in knobs.items():
- proj_elem.setAttribute(k, v)
+ if isinstance(v, dict):
+ continue
+ proj_elem.setAttribute(str(k), v)
def apply_colorspace_project():
+ project_name = os.getenv("AVALON_PROJECT")
# get path to the active project
project = get_current_project(remove_untitled=True)
current_file = project.path()
@@ -756,9 +771,9 @@ def apply_colorspace_project():
project.close()
# get presets for hiero
- presets = config.get_init_presets()
- colorspace = presets["colorspace"]
- hiero_project_clrs = colorspace.get("hiero", {}).get("project", {})
+ imageio = get_anatomy_settings(
+ project_name)["imageio"].get("hiero", None)
+ presets = imageio.get("workfile")
# save the workfile as subversion "comment:_colorspaceChange"
split_current_file = os.path.splitext(current_file)
@@ -789,13 +804,13 @@ def apply_colorspace_project():
os.remove(copy_current_file_tmp)
# use the code below for changing xml hrox Attributes
- hiero_project_clrs.update({"name": os.path.basename(copy_current_file)})
+ presets.update({"name": os.path.basename(copy_current_file)})
# read HROX in as QDomSocument
doc = _read_doc_from_path(copy_current_file)
# apply project colorspace properties
- _set_hrox_project_knobs(doc, **hiero_project_clrs)
+ _set_hrox_project_knobs(doc, **presets)
# write QDomSocument back as HROX
_write_doc_to_path(doc, copy_current_file)
@@ -805,14 +820,17 @@ def apply_colorspace_project():
def apply_colorspace_clips():
+ project_name = os.getenv("AVALON_PROJECT")
project = get_current_project(remove_untitled=True)
clips = project.clips()
# get presets for hiero
- presets = config.get_init_presets()
- colorspace = presets["colorspace"]
- hiero_clips_clrs = colorspace.get("hiero", {}).get("clips", {})
+ imageio = get_anatomy_settings(
+ project_name)["imageio"].get("hiero", None)
+    presets = imageio.get("regexInputs", {}).get("inputs", {})
for clip in clips:
clip_media_source_path = clip.mediaSource().firstpath()
clip_name = clip.name()
@@ -822,10 +840,11 @@ def apply_colorspace_clips():
continue
# check if any colorspace preset for the read is matching
- preset_clrsp = next((hiero_clips_clrs[k]
- for k in hiero_clips_clrs
- if bool(re.search(k, clip_media_source_path))),
- None)
+            preset_clrsp = None
+            for preset in presets:
+                if not re.search(preset["regex"], clip_media_source_path):
+                    continue
+                preset_clrsp = preset["colorspace"]
+                break
if preset_clrsp:
log.debug("Changing clip.path: {}".format(clip_media_source_path))
@@ -893,3 +912,61 @@ def get_sequence_pattern_and_padding(file):
return found, padding
else:
return None, None
+
+
+def sync_clip_name_to_data_asset(track_items_list):
+    # loop through all selected clips
+ for track_item in track_items_list:
+ # ignore if parent track is locked or disabled
+ if track_item.parent().isLocked():
+ continue
+ if not track_item.parent().isEnabled():
+ continue
+ # ignore if the track item is disabled
+ if not track_item.isEnabled():
+ continue
+
+ # get name and data
+ ti_name = track_item.name()
+ data = get_track_item_pype_data(track_item)
+
+ # ignore if no data on the clip or not publish instance
+ if not data:
+ continue
+ if data.get("id") != "pyblish.avalon.instance":
+ continue
+
+ # fix data if wrong name
+ if data["asset"] != ti_name:
+ data["asset"] = ti_name
+ # remove the original tag
+ tag = get_track_item_pype_tag(track_item)
+ track_item.removeTag(tag)
+ # create new tag with updated data
+ set_track_item_pype_tag(track_item, data)
+            log.debug("asset was changed in clip: {}".format(ti_name))
+
+
+def selection_changed_timeline(event):
+ """Callback on timeline to check if asset in data is the same as clip name.
+
+ Args:
+ event (hiero.core.Event): timeline event
+ """
+ timeline_editor = event.sender
+ selection = timeline_editor.selection()
+
+ # run checking function
+ sync_clip_name_to_data_asset(selection)
+
+
+def before_project_save(event):
+ track_items = get_track_items(
+ selected=False,
+ track_type="video",
+ check_enabled=True,
+ check_locked=True,
+ check_tagged=True)
+
+ # run checking function
+ sync_clip_name_to_data_asset(track_items)
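The clip loop in `apply_colorspace_clips` is a first-match regex lookup over preset entries; a standalone sketch with made-up presets and paths:

```python
import re


def match_colorspace(presets, path):
    """Return the colorspace of the first preset whose regex matches."""
    for preset in presets:
        if re.search(preset["regex"], path):
            return preset["colorspace"]
    return None


presets = [
    {"regex": r"\.exr$", "colorspace": "ACES - ACEScg"},
    {"regex": r"\.mov$", "colorspace": "Output - Rec.709"},
]
print(match_colorspace(presets, "/proj/sh010/plate.exr"))  # ACES - ACEScg
print(match_colorspace(presets, "/proj/sh010/notes.txt"))  # None
```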
diff --git a/openpype/hosts/hiero/api/menu.py b/openpype/hosts/hiero/api/menu.py
index 9ccf5e39d1..ab49251093 100644
--- a/openpype/hosts/hiero/api/menu.py
+++ b/openpype/hosts/hiero/api/menu.py
@@ -68,50 +68,45 @@ def menu_install():
menu.addSeparator()
- workfiles_action = menu.addAction("Work Files...")
+ workfiles_action = menu.addAction("Work Files ...")
workfiles_action.setIcon(QtGui.QIcon("icons:Position.png"))
workfiles_action.triggered.connect(launch_workfiles_app)
- default_tags_action = menu.addAction("Create Default Tags...")
+ default_tags_action = menu.addAction("Create Default Tags")
default_tags_action.setIcon(QtGui.QIcon("icons:Position.png"))
default_tags_action.triggered.connect(tags.add_tags_to_workfile)
menu.addSeparator()
- publish_action = menu.addAction("Publish...")
+ publish_action = menu.addAction("Publish ...")
publish_action.setIcon(QtGui.QIcon("icons:Output.png"))
publish_action.triggered.connect(
lambda *args: publish(hiero.ui.mainWindow())
)
- creator_action = menu.addAction("Create...")
+ creator_action = menu.addAction("Create ...")
creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
creator_action.triggered.connect(creator.show)
- loader_action = menu.addAction("Load...")
+ loader_action = menu.addAction("Load ...")
loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
loader_action.triggered.connect(cbloader.show)
- sceneinventory_action = menu.addAction("Manage...")
+ sceneinventory_action = menu.addAction("Manage ...")
sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
sceneinventory_action.triggered.connect(sceneinventory.show)
menu.addSeparator()
- reload_action = menu.addAction("Reload pipeline...")
- reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
- reload_action.triggered.connect(reload_config)
+ if os.getenv("OPENPYPE_DEVELOP"):
+ reload_action = menu.addAction("Reload pipeline")
+ reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
+ reload_action.triggered.connect(reload_config)
menu.addSeparator()
- apply_colorspace_p_action = menu.addAction("Apply Colorspace Project...")
+ apply_colorspace_p_action = menu.addAction("Apply Colorspace Project")
apply_colorspace_p_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
apply_colorspace_p_action.triggered.connect(apply_colorspace_project)
- apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips...")
+ apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips")
apply_colorspace_c_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
apply_colorspace_c_action.triggered.connect(apply_colorspace_clips)
-
- self.context_label_action = context_label_action
- self.workfile_actions = workfiles_action
- self.default_tags_action = default_tags_action
- self.publish_action = publish_action
- self.reload_action = reload_action
diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py
index 92e15cfae4..c46ef9abfa 100644
--- a/openpype/hosts/hiero/api/plugin.py
+++ b/openpype/hosts/hiero/api/plugin.py
@@ -4,10 +4,10 @@ import hiero
from Qt import QtWidgets, QtCore
from avalon.vendor import qargparse
import avalon.api as avalon
-import openpype.api as pype
+import openpype.api as openpype
from . import lib
-log = pype.Logger().get_logger(__name__)
+log = openpype.Logger().get_logger(__name__)
def load_stylesheet():
@@ -266,7 +266,8 @@ class CreatorWidget(QtWidgets.QDialog):
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
- setValue=v["value"], setMaximum=10000, setToolTip=tool_tip)
+ setValue=v["value"], setMinimum=0,
+ setMaximum=100000, setToolTip=tool_tip)
return data
@@ -387,7 +388,8 @@ class ClipLoader:
# try to get value from options or evaluate key value for `load_to`
self.new_sequence = options.get("newSequence") or bool(
"New timeline" in options.get("load_to", ""))
-
+ self.clip_name_template = options.get(
+ "clipNameTemplate") or "{asset}_{subset}_{representation}"
assert self._populate_data(), str(
"Cannot Load selected data, look into database "
"or call your supervisor")
@@ -432,7 +434,7 @@ class ClipLoader:
asset = str(repr_cntx["asset"])
subset = str(repr_cntx["subset"])
representation = str(repr_cntx["representation"])
- self.data["clip_name"] = "_".join([asset, subset, representation])
+ self.data["clip_name"] = self.clip_name_template.format(**repr_cntx)
self.data["track_name"] = "_".join([subset, representation])
self.data["versionData"] = self.context["version"]["data"]
# gets file path
@@ -476,7 +478,7 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
- self.data["assetData"] = pype.get_asset(asset_name)["data"]
+ self.data["assetData"] = openpype.get_asset(asset_name)["data"]
def _make_track_item(self, source_bin_item, audio=False):
""" Create track item with """
@@ -543,15 +545,9 @@ class ClipLoader:
if "slate" in f),
# if nothing was found then use default None
# so other bool could be used
- None) or bool(((
- # put together duration of clip attributes
- self.timeline_out - self.timeline_in + 1) \
- + self.handle_start \
- + self.handle_end
- # and compare it with meda duration
- ) > self.media_duration)
-
- print("__ slate_on: `{}`".format(slate_on))
+ None) or bool(int(
+ (self.timeline_out - self.timeline_in + 1)
+ + self.handle_start + self.handle_end) < self.media_duration)
# if slate is on then remove the slate frame from begining
if slate_on:
@@ -592,7 +588,7 @@ class ClipLoader:
return track_item
-class Creator(pype.Creator):
+class Creator(openpype.Creator):
"""Creator class wrapper
"""
clip_color = "Purple"
@@ -601,7 +597,7 @@ class Creator(pype.Creator):
def __init__(self, *args, **kwargs):
import openpype.hosts.hiero.api as phiero
super(Creator, self).__init__(*args, **kwargs)
- self.presets = pype.get_current_project_settings()[
+ self.presets = openpype.get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
# adding basic current context resolve objects
@@ -674,6 +670,9 @@ class PublishClip:
if kwargs.get("avalon"):
self.tag_data.update(kwargs["avalon"])
+ # add publish attribute to tag data
+ self.tag_data.update({"publish": True})
+
# adding ui inputs if any
self.ui_inputs = kwargs.get("ui_inputs", {})
@@ -687,6 +686,7 @@ class PublishClip:
self._create_parents()
def convert(self):
+
# solve track item data and add them to tag data
self._convert_to_tag_data()
@@ -705,6 +705,12 @@ class PublishClip:
self.tag_data["asset"] = new_name
else:
self.tag_data["asset"] = self.ti_name
+ self.tag_data["hierarchyData"]["shot"] = self.ti_name
+
+ if self.tag_data["heroTrack"] and self.review_layer:
+ self.tag_data.update({"reviewTrack": self.review_layer})
+ else:
+ self.tag_data.update({"reviewTrack": None})
# create pype tag on track_item and add data
lib.imprint(self.track_item, self.tag_data)
@@ -773,8 +779,8 @@ class PublishClip:
_spl = text.split("#")
_len = (len(_spl) - 1)
_repl = "{{{0}:0>{1}}}".format(name, _len)
- new_text = text.replace(("#" * _len), _repl)
- return new_text
+ return text.replace(("#" * _len), _repl)
+
def _convert_to_tag_data(self):
""" Convert internal data to tag data.
@@ -782,13 +788,13 @@ class PublishClip:
Populating the tag data into internal variable self.tag_data
"""
# define vertical sync attributes
- master_layer = True
+ hero_track = True
self.review_layer = ""
if self.vertical_sync:
# check if track name is not in driving layer
if self.track_name not in self.driving_layer:
# if it is not then define vertical sync as None
- master_layer = False
+ hero_track = False
# increasing steps by index of rename iteration
self.count_steps *= self.rename_index
@@ -802,7 +808,7 @@ class PublishClip:
self.tag_data[_k] = _v["value"]
# driving layer is set as positive match
- if master_layer or self.vertical_sync:
+ if hero_track or self.vertical_sync:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
@@ -836,40 +842,40 @@ class PublishClip:
hierarchy_formating_data
)
- tag_hierarchy_data.update({"masterLayer": True})
- if master_layer and self.vertical_sync:
+ tag_hierarchy_data.update({"heroTrack": True})
+ if hero_track and self.vertical_sync:
self.vertical_clip_match.update({
(self.clip_in, self.clip_out): tag_hierarchy_data
})
- if not master_layer and self.vertical_sync:
+ if not hero_track and self.vertical_sync:
# driving layer is set as negative match
- for (_in, _out), master_data in self.vertical_clip_match.items():
- master_data.update({"masterLayer": False})
+ for (_in, _out), hero_data in self.vertical_clip_match.items():
+ hero_data.update({"heroTrack": False})
if _in == self.clip_in and _out == self.clip_out:
- data_subset = master_data["subset"]
- # add track index in case duplicity of names in master data
+ data_subset = hero_data["subset"]
+                    # add track index in case of duplicate names in hero data
if self.subset in data_subset:
- master_data["subset"] = self.subset + str(
+ hero_data["subset"] = self.subset + str(
self.track_index)
# in case track name and subset name is the same then add
if self.subset_name == self.track_name:
- master_data["subset"] = self.subset
+ hero_data["subset"] = self.subset
# assing data to return hierarchy data to tag
- tag_hierarchy_data = master_data
+ tag_hierarchy_data = hero_data
# add data to return data dict
self.tag_data.update(tag_hierarchy_data)
- if master_layer and self.review_layer:
- self.tag_data.update({"reviewTrack": self.review_layer})
-
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
""" Solve tag data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
+        # remove shot from hierarchy data as it is no longer needed
+ hierarchy_formating_data.pop("shot")
+
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
diff --git a/openpype/hosts/hiero/api/tags.py b/openpype/hosts/hiero/api/tags.py
index 06fa655a2e..d2502f3c71 100644
--- a/openpype/hosts/hiero/api/tags.py
+++ b/openpype/hosts/hiero/api/tags.py
@@ -84,6 +84,13 @@ def update_tag(tag, data):
mtd = tag.metadata()
# get metadata key from data
data_mtd = data.get("metadata", {})
+
+    # due to a Hiero bug we have to clear any keys that are not present
+    # in the incoming data by setting their value to `None`
+ for _mk in mtd.keys():
+ if _mk.replace("tag.", "") not in data_mtd.keys():
+ mtd.setValue(_mk, str(None))
+
# set all data metadata to tag metadata
for k, v in data_mtd.items():
mtd.setValue(
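The stale-key workaround above is plain dict/string manipulation, so it can be sketched without Hiero; `clear_stale_keys` below is a hypothetical standalone mirror of the loop in `update_tag`:

```python
# Standalone mirror of the stale-key clearing in update_tag: any existing
# "tag."-prefixed key that has no counterpart in the incoming data is
# overwritten with the string "None", matching Hiero's string-only metadata.
def clear_stale_keys(existing_mtd, data_mtd):
    cleared = dict(existing_mtd)
    for _mk in existing_mtd:
        if _mk.replace("tag.", "") not in data_mtd:
            cleared[_mk] = str(None)
    return cleared

print(clear_stale_keys({"tag.old": "1", "tag.keep": "2"}, {"keep": "3"}))
# {'tag.old': 'None', 'tag.keep': '2'}
```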
diff --git a/openpype/hosts/hiero/otio/__init__.py b/openpype/hosts/hiero/otio/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/openpype/hosts/hiero/otio/hiero_export.py b/openpype/hosts/hiero/otio/hiero_export.py
new file mode 100644
index 0000000000..6e751d3aa4
--- /dev/null
+++ b/openpype/hosts/hiero/otio/hiero_export.py
@@ -0,0 +1,366 @@
+""" compatibility OpenTimelineIO 0.12.0 and newer
+"""
+
+import os
+import re
+import sys
+import ast
+try:
+    from compiler.ast import flatten
+except ImportError:
+    # Python 3 removed the compiler module; fall back to a local helper
+    def flatten(nested):
+        """Flatten arbitrarily nested lists/tuples into a flat list."""
+        return [x for item in nested
+                for x in (flatten(item)
+                          if isinstance(item, (list, tuple)) else [item])]
+import opentimelineio as otio
+from . import utils
+import hiero.core
+import hiero.ui
+
+self = sys.modules[__name__]
+self.track_types = {
+ hiero.core.VideoTrack: otio.schema.TrackKind.Video,
+ hiero.core.AudioTrack: otio.schema.TrackKind.Audio
+}
+self.project_fps = None
+self.marker_color_map = {
+ "magenta": otio.schema.MarkerColor.MAGENTA,
+ "red": otio.schema.MarkerColor.RED,
+ "yellow": otio.schema.MarkerColor.YELLOW,
+ "green": otio.schema.MarkerColor.GREEN,
+ "cyan": otio.schema.MarkerColor.CYAN,
+ "blue": otio.schema.MarkerColor.BLUE,
+}
+self.timeline = None
+self.include_tags = True
+
+
+def get_current_hiero_project(remove_untitled=False):
+ projects = flatten(hiero.core.projects())
+ if not remove_untitled:
+ return next(iter(projects))
+
+ # if remove_untitled
+ for proj in projects:
+ if "Untitled" in proj.name():
+ proj.close()
+ else:
+ return proj
+
+
+def create_otio_rational_time(frame, fps):
+ return otio.opentime.RationalTime(
+ float(frame),
+ float(fps)
+ )
+
+
+def create_otio_time_range(start_frame, frame_duration, fps):
+ return otio.opentime.TimeRange(
+ start_time=create_otio_rational_time(start_frame, fps),
+ duration=create_otio_rational_time(frame_duration, fps)
+ )
+
+
+def _get_metadata(item):
+ if hasattr(item, 'metadata'):
+        return dict(item.metadata())
+ return {}
+
+
+def create_otio_reference(clip):
+ metadata = _get_metadata(clip)
+ media_source = clip.mediaSource()
+
+ # get file info for path and start frame
+ file_info = media_source.fileinfos().pop()
+ frame_start = file_info.startFrame()
+ path = file_info.filename()
+
+ # get padding and other file infos
+ padding = media_source.filenamePadding()
+ file_head = media_source.filenameHead()
+ is_sequence = not media_source.singleFile()
+ frame_duration = media_source.duration()
+ fps = utils.get_rate(clip) or self.project_fps
+ extension = os.path.splitext(path)[-1]
+
+ if is_sequence:
+ metadata.update({
+ "isSequence": True,
+ "padding": padding
+ })
+
+ # add resolution metadata
+ metadata.update({
+ "openpype.source.colourtransform": clip.sourceMediaColourTransform(),
+ "openpype.source.width": int(media_source.width()),
+ "openpype.source.height": int(media_source.height()),
+ "openpype.source.pixelAspect": float(media_source.pixelAspect())
+ })
+
+ otio_ex_ref_item = None
+
+ if is_sequence:
+ # if it is file sequence try to create `ImageSequenceReference`
+        # older OTIO versions may not support it, so fall back to the old way
+ try:
+ dirname = os.path.dirname(path)
+ otio_ex_ref_item = otio.schema.ImageSequenceReference(
+ target_url_base=dirname + os.sep,
+ name_prefix=file_head,
+ name_suffix=extension,
+ start_frame=frame_start,
+ frame_zero_padding=padding,
+ rate=fps,
+ available_range=create_otio_time_range(
+ frame_start,
+ frame_duration,
+ fps
+ )
+ )
+ except AttributeError:
+ pass
+
+ if not otio_ex_ref_item:
+ reformat_path = utils.get_reformated_path(path, padded=False)
+ # in case old OTIO or video file create `ExternalReference`
+ otio_ex_ref_item = otio.schema.ExternalReference(
+ target_url=reformat_path,
+ available_range=create_otio_time_range(
+ frame_start,
+ frame_duration,
+ fps
+ )
+ )
+
+ # add metadata to otio item
+ add_otio_metadata(otio_ex_ref_item, media_source, **metadata)
+
+ return otio_ex_ref_item
+
+
+def get_marker_color(tag):
+ icon = tag.icon()
+    pat = r'icons:Tag(?P<color>\w+)\.\w+'
+
+ res = re.search(pat, icon)
+ if res:
+ color = res.groupdict().get('color')
+ if color.lower() in self.marker_color_map:
+ return self.marker_color_map[color.lower()]
+
+ return otio.schema.MarkerColor.RED
+
+
+def create_otio_markers(otio_item, item):
+ for tag in item.tags():
+ if not tag.visible():
+ continue
+
+ if tag.name() == 'Copy':
+ # Hiero adds this tag to a lot of clips
+ continue
+
+ frame_rate = utils.get_rate(item) or self.project_fps
+
+ marked_range = otio.opentime.TimeRange(
+ start_time=otio.opentime.RationalTime(
+ tag.inTime(),
+ frame_rate
+ ),
+ duration=otio.opentime.RationalTime(
+ int(tag.metadata().dict().get('tag.length', '0')),
+ frame_rate
+ )
+ )
+ # add tag metadata but remove "tag." string
+ metadata = {}
+
+ for key, value in tag.metadata().dict().items():
+ _key = key.replace("tag.", "")
+
+ try:
+ # capture exceptions which are related to strings only
+ _value = ast.literal_eval(value)
+ except (ValueError, SyntaxError):
+ _value = value
+
+ metadata.update({_key: _value})
+
+ # Store the source item for future import assignment
+ metadata['hiero_source_type'] = item.__class__.__name__
+
+ marker = otio.schema.Marker(
+ name=tag.name(),
+ color=get_marker_color(tag),
+ marked_range=marked_range,
+ metadata=metadata
+ )
+
+ otio_item.markers.append(marker)
+
+
+def create_otio_clip(track_item):
+ clip = track_item.source()
+ source_in = track_item.sourceIn()
+ duration = track_item.sourceDuration()
+ fps = utils.get_rate(track_item) or self.project_fps
+ name = track_item.name()
+
+ media_reference = create_otio_reference(clip)
+ source_range = create_otio_time_range(
+ int(source_in),
+ int(duration),
+ fps
+ )
+
+ otio_clip = otio.schema.Clip(
+ name=name,
+ source_range=source_range,
+ media_reference=media_reference
+ )
+
+ # Add tags as markers
+ if self.include_tags:
+ create_otio_markers(otio_clip, track_item)
+ create_otio_markers(otio_clip, track_item.source())
+
+ return otio_clip
+
+
+def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
+ return otio.schema.Gap(
+ source_range=create_otio_time_range(
+ gap_start,
+ (clip_start - tl_start_frame) - gap_start,
+ fps
+ )
+ )
+
+
+def _create_otio_timeline():
+ project = get_current_hiero_project(remove_untitled=False)
+ metadata = _get_metadata(self.timeline)
+
+ metadata.update({
+ "openpype.timeline.width": int(self.timeline.format().width()),
+ "openpype.timeline.height": int(self.timeline.format().height()),
+ "openpype.timeline.pixelAspect": int(self.timeline.format().pixelAspect()), # noqa
+ "openpype.project.useOCIOEnvironmentOverride": project.useOCIOEnvironmentOverride(), # noqa
+ "openpype.project.lutSetting16Bit": project.lutSetting16Bit(),
+ "openpype.project.lutSetting8Bit": project.lutSetting8Bit(),
+ "openpype.project.lutSettingFloat": project.lutSettingFloat(),
+ "openpype.project.lutSettingLog": project.lutSettingLog(),
+ "openpype.project.lutSettingViewer": project.lutSettingViewer(),
+ "openpype.project.lutSettingWorkingSpace": project.lutSettingWorkingSpace(), # noqa
+ "openpype.project.lutUseOCIOForExport": project.lutUseOCIOForExport(),
+ "openpype.project.ocioConfigName": project.ocioConfigName(),
+ "openpype.project.ocioConfigPath": project.ocioConfigPath()
+ })
+
+ start_time = create_otio_rational_time(
+ self.timeline.timecodeStart(), self.project_fps)
+
+ return otio.schema.Timeline(
+ name=self.timeline.name(),
+ global_start_time=start_time,
+ metadata=metadata
+ )
+
+
+def create_otio_track(track_type, track_name):
+ return otio.schema.Track(
+ name=track_name,
+ kind=self.track_types[track_type]
+ )
+
+
+def add_otio_gap(track_item, otio_track, prev_out):
+ gap_length = track_item.timelineIn() - prev_out
+ if prev_out != 0:
+ gap_length -= 1
+
+ gap = otio.opentime.TimeRange(
+ duration=otio.opentime.RationalTime(
+ gap_length,
+ self.project_fps
+ )
+ )
+ otio_gap = otio.schema.Gap(source_range=gap)
+ otio_track.append(otio_gap)
+
+
+def add_otio_metadata(otio_item, media_source, **kwargs):
+ metadata = _get_metadata(media_source)
+
+ # add additional metadata from kwargs
+ if kwargs:
+ metadata.update(kwargs)
+
+ # add metadata to otio item metadata
+ for key, value in metadata.items():
+ otio_item.metadata.update({key: value})
+
+
+def create_otio_timeline():
+
+ # get current timeline
+ self.timeline = hiero.ui.activeSequence()
+ self.project_fps = self.timeline.framerate().toFloat()
+
+ # convert timeline to otio
+ otio_timeline = _create_otio_timeline()
+
+ # loop all defined track types
+ for track in self.timeline.items():
+ # skip if track is disabled
+ if not track.isEnabled():
+ continue
+
+ # convert track to otio
+ otio_track = create_otio_track(
+ type(track), track.name())
+
+ for itemindex, track_item in enumerate(track):
+ # skip offline track items
+ if not track_item.isMediaPresent():
+ continue
+
+ # skip if track item is disabled
+ if not track_item.isEnabled():
+ continue
+
+ # Add Gap if needed
+ if itemindex == 0:
+                # the first track item on a track acts as
+                # its own previous item
+ prev_item = track_item
+
+ else:
+                # get previous item
+ prev_item = track_item.parent().items()[itemindex - 1]
+
+ # calculate clip frame range difference from each other
+ clip_diff = track_item.timelineIn() - prev_item.timelineOut()
+
+ # add gap if first track item is not starting
+ # at first timeline frame
+ if itemindex == 0 and track_item.timelineIn() > 0:
+ add_otio_gap(track_item, otio_track, 0)
+
+            # or add a gap if consecutive track items do not
+            # butt up against each other
+ elif itemindex and clip_diff != 1:
+ add_otio_gap(track_item, otio_track, prev_item.timelineOut())
+
+ # create otio clip and add it to track
+ otio_clip = create_otio_clip(track_item)
+ otio_track.append(otio_clip)
+
+ # Add tags as markers
+ if self.include_tags:
+ create_otio_markers(otio_track, track)
+
+ # add track to otio timeline
+ otio_timeline.tracks.append(otio_track)
+
+ return otio_timeline
+
+
+def write_to_file(otio_timeline, path):
+ otio.adapters.write_to_file(otio_timeline, path)
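The gap length used by `add_otio_gap` above is simple frame arithmetic; a minimal standalone check (hypothetical frame numbers, no Hiero or OTIO needed):

```python
# Mirror of the gap-length rule in add_otio_gap: distance from the previous
# item's out point to the next item's in point, minus the one shared frame
# at a cut whenever the previous out is not the timeline start.
def gap_length(timeline_in, prev_out):
    length = timeline_in - prev_out
    if prev_out != 0:
        length -= 1
    return length

print(gap_length(10, 0))    # first item starting at frame 10 -> 10
print(gap_length(101, 50))  # item at 101 after an item ending at 50 -> 50
```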
diff --git a/openpype/hosts/hiero/otio/hiero_import.py b/openpype/hosts/hiero/otio/hiero_import.py
new file mode 100644
index 0000000000..257c434011
--- /dev/null
+++ b/openpype/hosts/hiero/otio/hiero_import.py
@@ -0,0 +1,545 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+__author__ = "Daniel Flehner Heen"
+__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
+
+
+import os
+import hiero.core
+import hiero.ui
+
+import PySide2.QtWidgets as qw
+
+try:
+ from urllib import unquote
+
+except ImportError:
+ from urllib.parse import unquote # lint:ok
+
+import opentimelineio as otio
+
+_otio_old = False
+
+
+def inform(messages):
+    if isinstance(messages, str):
+ messages = [messages]
+
+ qw.QMessageBox.information(
+ hiero.ui.mainWindow(),
+ 'OTIO Import',
+ '\n'.join(messages),
+ qw.QMessageBox.StandardButton.Ok
+ )
+
+
+def get_transition_type(otio_item, otio_track):
+ _in, _out = otio_track.neighbors_of(otio_item)
+
+ if isinstance(_in, otio.schema.Gap):
+ _in = None
+
+ if isinstance(_out, otio.schema.Gap):
+ _out = None
+
+ if _in and _out:
+ return 'dissolve'
+
+ elif _in and not _out:
+ return 'fade_out'
+
+ elif not _in and _out:
+ return 'fade_in'
+
+ else:
+ return 'unknown'
+
+
+def find_trackitem(otio_clip, hiero_track):
+ for item in hiero_track.items():
+ if item.timelineIn() == otio_clip.range_in_parent().start_time.value:
+ if item.name() == otio_clip.name:
+ return item
+
+ return None
+
+
+def get_neighboring_trackitems(otio_item, otio_track, hiero_track):
+ _in, _out = otio_track.neighbors_of(otio_item)
+ trackitem_in = None
+ trackitem_out = None
+
+ if _in:
+ trackitem_in = find_trackitem(_in, hiero_track)
+
+ if _out:
+ trackitem_out = find_trackitem(_out, hiero_track)
+
+ return trackitem_in, trackitem_out
+
+
+def apply_transition(otio_track, otio_item, track):
+ warning = None
+
+ # Figure out type of transition
+ transition_type = get_transition_type(otio_item, otio_track)
+
+ # Figure out track kind for getattr below
+ kind = ''
+ if isinstance(track, hiero.core.AudioTrack):
+ kind = 'Audio'
+
+    # Gather TrackItems involved in transition
+ item_in, item_out = get_neighboring_trackitems(
+ otio_item,
+ otio_track,
+ track
+ )
+
+ # Create transition object
+ if transition_type == 'dissolve':
+ transition_func = getattr(
+ hiero.core.Transition,
+ 'create{kind}DissolveTransition'.format(kind=kind)
+ )
+
+ try:
+ transition = transition_func(
+ item_in,
+ item_out,
+ otio_item.in_offset.value,
+ otio_item.out_offset.value
+ )
+
+ # Catch error raised if transition is bigger than TrackItem source
+ except RuntimeError as e:
+ transition = None
+ warning = (
+ "Unable to apply transition \"{t.name}\": {e} "
+ "Ignoring the transition.").format(t=otio_item, e=str(e))
+
+ elif transition_type == 'fade_in':
+ transition_func = getattr(
+ hiero.core.Transition,
+ 'create{kind}FadeInTransition'.format(kind=kind)
+ )
+
+ # Warn user if part of fade is outside of clip
+ if otio_item.in_offset.value:
+ warning = \
+                'First half of transition "{t.name}" is outside of clip and ' \
+ 'not valid in Hiero. Only applied second half.' \
+ .format(t=otio_item)
+
+ transition = transition_func(
+ item_out,
+ otio_item.out_offset.value
+ )
+
+ elif transition_type == 'fade_out':
+ transition_func = getattr(
+ hiero.core.Transition,
+ 'create{kind}FadeOutTransition'.format(kind=kind)
+ )
+ transition = transition_func(
+ item_in,
+ otio_item.in_offset.value
+ )
+
+ # Warn user if part of fade is outside of clip
+ if otio_item.out_offset.value:
+ warning = \
+ 'Second half of transition "{t.name}" is outside of clip ' \
+ 'and not valid in Hiero. Only applied first half.' \
+ .format(t=otio_item)
+
+ else:
+ # Unknown transition
+ return
+
+ # Apply transition to track
+ if transition:
+ track.addTransition(transition)
+
+ # Inform user about missing or adjusted transitions
+ return warning
+
+
+def prep_url(url_in):
+ url = unquote(url_in)
+
+ if url.startswith('file://localhost/'):
+ return url
+
+ url = 'file://localhost{sep}{url}'.format(
+ sep=url.startswith(os.sep) and '' or os.sep,
+ url=url.startswith(os.sep) and url[1:] or url
+ )
+
+ return url
+
+
+def create_offline_mediasource(otio_clip, path=None):
+ global _otio_old
+
+ hiero_rate = hiero.core.TimeBase(
+ otio_clip.source_range.start_time.rate
+ )
+
+ try:
+ legal_media_refs = (
+ otio.schema.ExternalReference,
+ otio.schema.ImageSequenceReference
+ )
+ except AttributeError:
+ _otio_old = True
+        legal_media_refs = (
+            otio.schema.ExternalReference,
+        )
+
+ if isinstance(otio_clip.media_reference, legal_media_refs):
+ source_range = otio_clip.available_range()
+
+ else:
+ source_range = otio_clip.source_range
+
+ if path is None:
+ path = otio_clip.name
+
+ media = hiero.core.MediaSource.createOfflineVideoMediaSource(
+ prep_url(path),
+ source_range.start_time.value,
+ source_range.duration.value,
+ hiero_rate,
+ source_range.start_time.value
+ )
+
+ return media
+
+
+def load_otio(otio_file, project=None, sequence=None):
+ otio_timeline = otio.adapters.read_from_file(otio_file)
+ build_sequence(otio_timeline, project=project, sequence=sequence)
+
+
+marker_color_map = {
+ "PINK": "Magenta",
+ "RED": "Red",
+ "ORANGE": "Yellow",
+ "YELLOW": "Yellow",
+ "GREEN": "Green",
+ "CYAN": "Cyan",
+ "BLUE": "Blue",
+ "PURPLE": "Magenta",
+ "MAGENTA": "Magenta",
+ "BLACK": "Blue",
+ "WHITE": "Green"
+}
+
+
+def get_tag(tagname, tagsbin):
+ for tag in tagsbin.items():
+ if tag.name() == tagname:
+ return tag
+
+ if isinstance(tag, hiero.core.Bin):
+ tag = get_tag(tagname, tag)
+
+ if tag is not None:
+ return tag
+
+ return None
+
+
+def add_metadata(metadata, hiero_item):
+ for key, value in metadata.get('Hiero', dict()).items():
+ if key == 'source_type':
+ # Only used internally to reassign tag to correct Hiero item
+ continue
+
+ if isinstance(value, dict):
+ add_metadata(value, hiero_item)
+ continue
+
+ if value is not None:
+ if not key.startswith('tag.'):
+ key = 'tag.' + key
+
+ hiero_item.metadata().setValue(key, str(value))
+
+
+def add_markers(otio_item, hiero_item, tagsbin):
+ if isinstance(otio_item, (otio.schema.Stack, otio.schema.Clip)):
+ markers = otio_item.markers
+
+ elif isinstance(otio_item, otio.schema.Timeline):
+ markers = otio_item.tracks.markers
+
+ else:
+ markers = []
+
+ for marker in markers:
+ meta = marker.metadata.get('Hiero', dict())
+ if 'source_type' in meta:
+ if hiero_item.__class__.__name__ != meta.get('source_type'):
+ continue
+
+ marker_color = marker.color
+
+ _tag = get_tag(marker.name, tagsbin)
+ if _tag is None:
+ _tag = get_tag(marker_color_map[marker_color], tagsbin)
+
+ if _tag is None:
+ _tag = hiero.core.Tag(marker_color_map[marker.color])
+
+ start = marker.marked_range.start_time.value
+ end = (
+ marker.marked_range.start_time.value +
+ marker.marked_range.duration.value
+ )
+
+ if hasattr(hiero_item, 'addTagToRange'):
+ tag = hiero_item.addTagToRange(_tag, start, end)
+
+ else:
+ tag = hiero_item.addTag(_tag)
+
+ tag.setName(marker.name or marker_color_map[marker_color])
+ # tag.setNote(meta.get('tag.note', ''))
+
+ # Add metadata
+ add_metadata(marker.metadata, tag)
+
+
+def create_track(otio_track, tracknum, track_kind):
+ if track_kind is None and hasattr(otio_track, 'kind'):
+ track_kind = otio_track.kind
+
+ # Create a Track
+ if track_kind == otio.schema.TrackKind.Video:
+ track = hiero.core.VideoTrack(
+ otio_track.name or 'Video{n}'.format(n=tracknum)
+ )
+
+ else:
+ track = hiero.core.AudioTrack(
+ otio_track.name or 'Audio{n}'.format(n=tracknum)
+ )
+
+ return track
+
+
+def create_clip(otio_clip, tagsbin, sequencebin):
+ # Create MediaSource
+ url = None
+ media = None
+ otio_media = otio_clip.media_reference
+
+ if isinstance(otio_media, otio.schema.ExternalReference):
+ url = prep_url(otio_media.target_url)
+ media = hiero.core.MediaSource(url)
+
+ elif not _otio_old:
+ if isinstance(otio_media, otio.schema.ImageSequenceReference):
+ url = prep_url(otio_media.abstract_target_url('#'))
+ media = hiero.core.MediaSource(url)
+
+ if media is None or media.isOffline():
+ media = create_offline_mediasource(otio_clip, url)
+
+ # Reuse previous clip if possible
+ clip = None
+ for item in sequencebin.clips():
+ if item.activeItem().mediaSource() == media:
+ clip = item.activeItem()
+ break
+
+ if not clip:
+ # Create new Clip
+ clip = hiero.core.Clip(media)
+
+ # Add Clip to a Bin
+ sequencebin.addItem(hiero.core.BinItem(clip))
+
+ # Add markers
+ add_markers(otio_clip, clip, tagsbin)
+
+ return clip
+
+
+def create_trackitem(playhead, track, otio_clip, clip):
+ source_range = otio_clip.source_range
+
+ trackitem = track.createTrackItem(otio_clip.name)
+ trackitem.setPlaybackSpeed(source_range.start_time.rate)
+ trackitem.setSource(clip)
+
+ time_scalar = 1.
+
+ # Check for speed effects and adjust playback speed accordingly
+ for effect in otio_clip.effects:
+ if isinstance(effect, otio.schema.LinearTimeWarp):
+ time_scalar = effect.time_scalar
+ # Only reverse effect can be applied here
+ if abs(time_scalar) == 1.:
+ trackitem.setPlaybackSpeed(
+ trackitem.playbackSpeed() * time_scalar)
+
+ elif isinstance(effect, otio.schema.FreezeFrame):
+ # For freeze frame, playback speed must be set after range
+ time_scalar = 0.
+
+ # If reverse playback speed swap source in and out
+ if trackitem.playbackSpeed() < 0:
+ source_out = source_range.start_time.value
+ source_in = source_range.end_time_inclusive().value
+
+ timeline_in = playhead + source_out
+ timeline_out = (
+ timeline_in +
+ source_range.duration.value
+ ) - 1
+ else:
+ # Normal playback speed
+ source_in = source_range.start_time.value
+ source_out = source_range.end_time_inclusive().value
+
+ timeline_in = playhead
+ timeline_out = (
+ timeline_in +
+ source_range.duration.value
+ ) - 1
+
+ # Set source and timeline in/out points
+ trackitem.setTimes(
+ timeline_in,
+ timeline_out,
+ source_in,
+ source_out
+
+ )
+
+ # Apply playback speed for freeze frames
+ if abs(time_scalar) != 1.:
+ trackitem.setPlaybackSpeed(trackitem.playbackSpeed() * time_scalar)
+
+ # Link audio to video when possible
+ if isinstance(track, hiero.core.AudioTrack):
+ for other in track.parent().trackItemsAt(playhead):
+ if other.source() == clip:
+ trackitem.link(other)
+
+ return trackitem
+
+
+def build_sequence(
+ otio_timeline, project=None, sequence=None, track_kind=None):
+ if project is None:
+ if sequence:
+ project = sequence.project()
+
+ else:
+            # As of version 12.1v2 there is no way of getting the active project
+ project = hiero.core.projects(hiero.core.Project.kUserProjects)[-1]
+
+ projectbin = project.clipsBin()
+
+ if not sequence:
+ # Create a Sequence
+ sequence = hiero.core.Sequence(otio_timeline.name or 'OTIOSequence')
+
+ # Set sequence settings from otio timeline if available
+ if (
+ hasattr(otio_timeline, 'global_start_time')
+ and otio_timeline.global_start_time
+ ):
+ start_time = otio_timeline.global_start_time
+ sequence.setFramerate(start_time.rate)
+ sequence.setTimecodeStart(start_time.value)
+
+ # Create a Bin to hold clips
+ projectbin.addItem(hiero.core.BinItem(sequence))
+
+ sequencebin = hiero.core.Bin(sequence.name())
+ projectbin.addItem(sequencebin)
+
+ else:
+ sequencebin = projectbin
+
+ # Get tagsBin
+ tagsbin = hiero.core.project("Tag Presets").tagsBin()
+
+ # Add timeline markers
+ add_markers(otio_timeline, sequence, tagsbin)
+
+ if isinstance(otio_timeline, otio.schema.Timeline):
+ tracks = otio_timeline.tracks
+
+ else:
+ tracks = [otio_timeline]
+
+ for tracknum, otio_track in enumerate(tracks):
+ playhead = 0
+ _transitions = []
+
+ # Add track to sequence
+ track = create_track(otio_track, tracknum, track_kind)
+ sequence.addTrack(track)
+
+ # iterate over items in track
+ for _itemnum, otio_clip in enumerate(otio_track):
+ if isinstance(otio_clip, (otio.schema.Track, otio.schema.Stack)):
+ inform('Nested sequences/tracks are created separately.')
+
+ # Add gap where the nested sequence would have been
+ playhead += otio_clip.source_range.duration.value
+
+ # Process nested sequence
+ build_sequence(
+ otio_clip,
+ project=project,
+ track_kind=otio_track.kind
+ )
+
+ elif isinstance(otio_clip, otio.schema.Clip):
+ # Create a Clip
+ clip = create_clip(otio_clip, tagsbin, sequencebin)
+
+ # Create TrackItem
+ trackitem = create_trackitem(
+ playhead,
+ track,
+ otio_clip,
+ clip
+ )
+
+ # Add markers
+ add_markers(otio_clip, trackitem, tagsbin)
+
+ # Add trackitem to track
+ track.addTrackItem(trackitem)
+
+ # Update playhead
+ playhead = trackitem.timelineOut() + 1
+
+ elif isinstance(otio_clip, otio.schema.Transition):
+ # Store transitions for when all clips in the track are created
+ _transitions.append((otio_track, otio_clip))
+
+ elif isinstance(otio_clip, otio.schema.Gap):
+ # Hiero has no fillers, slugs or blanks at the moment
+ playhead += otio_clip.source_range.duration.value
+
+ # Apply transitions we stored earlier now that all clips are present
+ warnings = []
+ for otio_track, otio_item in _transitions:
+            # Catch warnings from transitions in case
+            # of unsupported transitions
+ warning = apply_transition(otio_track, otio_item, track)
+ if warning:
+ warnings.append(warning)
+
+ if warnings:
+ inform(warnings)
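`prep_url` above depends only on `os.sep` and `unquote`, so its behaviour is easy to exercise in isolation; below is a hypothetical standalone mirror with the separator passed in for determinism:

```python
from urllib.parse import unquote

# Standalone mirror of prep_url: normalize a plain or percent-encoded path
# into the file://localhost/... form Hiero's MediaSource expects.
def prep_url(url_in, sep="/"):
    url = unquote(url_in)
    if url.startswith("file://localhost/"):
        return url
    # note: `cond and '' or sep` always falls through to sep because ''
    # is falsy, but the result is a single separator either way
    return "file://localhost{sep}{url}".format(
        sep=url.startswith(sep) and "" or sep,
        url=url.startswith(sep) and url[1:] or url,
    )

print(prep_url("/shows/ep01/plate%20v001.mov"))
# file://localhost/shows/ep01/plate v001.mov
```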
diff --git a/openpype/hosts/hiero/otio/utils.py b/openpype/hosts/hiero/otio/utils.py
new file mode 100644
index 0000000000..f882a5d1f2
--- /dev/null
+++ b/openpype/hosts/hiero/otio/utils.py
@@ -0,0 +1,76 @@
+import re
+import opentimelineio as otio
+
+
+def timecode_to_frames(timecode, framerate):
+    rt = otio.opentime.from_timecode(timecode, framerate)
+ return int(otio.opentime.to_frames(rt))
+
+
+def frames_to_timecode(frames, framerate):
+ rt = otio.opentime.from_frames(frames, framerate)
+ return otio.opentime.to_timecode(rt)
+
+
+def frames_to_secons(frames, framerate):
+ rt = otio.opentime.from_frames(frames, framerate)
+ return otio.opentime.to_seconds(rt)
+
+
+def get_reformated_path(path, padded=True):
+ """
+ Return fixed python expression path
+
+ Args:
+ path (str): path url or simple file name
+
+ Returns:
+ type: string with reformated path
+
+ Example:
+        get_reformated_path("plate.%4d.exr") > plate.%04d.exr
+
+ """
+ if "%" in path:
+ padding_pattern = r"(\d+)"
+ padding = int(re.findall(padding_pattern, path).pop())
+ num_pattern = r"(%\d+d)"
+ if padded:
+ path = re.sub(num_pattern, "%0{}d".format(padding), path)
+ else:
+ path = re.sub(num_pattern, "%d", path)
+ return path
+
+
+def get_padding_from_path(path):
+ """
+ Return padding number from DaVinci Resolve sequence path style
+
+ Args:
+ path (str): path url or simple file name
+
+ Returns:
+ int: padding number
+
+ Example:
+ get_padding_from_path("plate.[0001-1008].exr") > 4
+
+ """
+ padding_pattern = "(\\d+)(?=-)"
+ if "[" in path:
+ return len(re.findall(padding_pattern, path).pop())
+
+ return None
+
+
+def get_rate(item):
+ if not hasattr(item, 'framerate'):
+ return None
+
+ num, den = item.framerate().toRational()
+ rate = float(num) / float(den)
+
+ if rate.is_integer():
+ return rate
+
+ return round(rate, 4)
diff --git a/openpype/hosts/hiero/plugins/create/create_shot_clip.py b/openpype/hosts/hiero/plugins/create/create_shot_clip.py
index 07b7a62b2a..25be9f090b 100644
--- a/openpype/hosts/hiero/plugins/create/create_shot_clip.py
+++ b/openpype/hosts/hiero/plugins/create/create_shot_clip.py
@@ -120,9 +120,9 @@ class CreateShotClip(phiero.Creator):
"vSyncTrack": {
"value": gui_tracks, # noqa
"type": "QComboBox",
- "label": "Master track",
+ "label": "Hero track",
"target": "ui",
- "toolTip": "Select driving track name which should be mastering all others", # noqa
+ "toolTip": "Select driving track name which should be hero for all others", # noqa
"order": 1}
}
},
diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py
index 4eadf28956..9e12fa360e 100644
--- a/openpype/hosts/hiero/plugins/load/load_clip.py
+++ b/openpype/hosts/hiero/plugins/load/load_clip.py
@@ -29,13 +29,19 @@ class LoadClip(phiero.SequenceLoader):
clip_color_last = "green"
clip_color = "red"
- def load(self, context, name, namespace, options):
+ clip_name_template = "{asset}_{subset}_{representation}"
+ def load(self, context, name, namespace, options):
+ # add clip name template to options
+ options.update({
+ "clipNameTemplate": self.clip_name_template
+ })
# in case loader uses multiselection
if self.track and self.sequence:
options.update({
"sequence": self.sequence,
- "track": self.track
+ "track": self.track,
+ "clipNameTemplate": self.clip_name_template
})
# load clip to timeline and get main variables
@@ -45,7 +51,8 @@ class LoadClip(phiero.SequenceLoader):
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
- object_name = "{}_{}".format(name, namespace)
+ object_name = self.clip_name_template.format(
+ **context["representation"]["context"])
# add additional metadata from the version to imprint Avalon knob
add_keys = [
diff --git a/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py b/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py
new file mode 100644
index 0000000000..d12e7665bf
--- /dev/null
+++ b/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py
@@ -0,0 +1,59 @@
+import os
+import pyblish.api
+import openpype.api
+
+
+class ExtractThumbnail(openpype.api.Extractor):
+ """
+ Extractor for track item's thumbnails
+ """
+
+ label = "Extract Thumbnail"
+ order = pyblish.api.ExtractorOrder
+ families = ["plate", "take"]
+ hosts = ["hiero"]
+
+ def process(self, instance):
+ # create representation data
+ if "representations" not in instance.data:
+ instance.data["representations"] = []
+
+ staging_dir = self.staging_dir(instance)
+
+ self.create_thumbnail(staging_dir, instance)
+
+ def create_thumbnail(self, staging_dir, instance):
+ track_item = instance.data["item"]
+ track_item_name = track_item.name()
+
+ # frames
+ duration = track_item.sourceDuration()
+ frame_start = track_item.sourceIn()
+ self.log.debug(
+ "__ frame_start: `{}`, duration: `{}`".format(
+ frame_start, duration))
+
+ # get thumbnail frame from the middle
+ thumb_frame = int(frame_start + (duration / 2))
+
+ thumb_file = "{}thumbnail{}{}".format(
+ track_item_name, thumb_frame, ".png")
+ thumb_path = os.path.join(staging_dir, thumb_file)
+
+ thumbnail = track_item.thumbnail(thumb_frame).save(
+ thumb_path,
+ format='png'
+ )
+ self.log.debug(
+ "__ thumb_path: `{}`, frame: `{}`".format(thumbnail, thumb_frame))
+
+ self.log.info("Thumnail was generated to: {}".format(thumb_path))
+ thumb_representation = {
+ 'files': thumb_file,
+ 'stagingDir': staging_dir,
+ 'name': "thumbnail",
+ 'thumbnail': True,
+ 'ext': "png"
+ }
+ instance.data["representations"].append(
+ thumb_representation)
diff --git a/openpype/hosts/hiero/plugins/publish/version_up_workfile.py b/openpype/hosts/hiero/plugins/publish/integrate_version_up_workfile.py
similarity index 90%
rename from openpype/hosts/hiero/plugins/publish/version_up_workfile.py
rename to openpype/hosts/hiero/plugins/publish/integrate_version_up_workfile.py
index ae03513d78..934e7112fa 100644
--- a/openpype/hosts/hiero/plugins/publish/version_up_workfile.py
+++ b/openpype/hosts/hiero/plugins/publish/integrate_version_up_workfile.py
@@ -2,7 +2,7 @@ from pyblish import api
import openpype.api as pype
-class VersionUpWorkfile(api.ContextPlugin):
+class IntegrateVersionUpWorkfile(api.ContextPlugin):
"""Save as new workfile version"""
order = api.IntegratorOrder + 10.1
diff --git a/openpype/hosts/hiero/plugins/publish/precollect_instances.py b/openpype/hosts/hiero/plugins/publish/precollect_instances.py
index bdf007de06..a1dee711b7 100644
--- a/openpype/hosts/hiero/plugins/publish/precollect_instances.py
+++ b/openpype/hosts/hiero/plugins/publish/precollect_instances.py
@@ -1,221 +1,204 @@
-from compiler.ast import flatten
-from pyblish import api
+import pyblish
+import openpype
from openpype.hosts.hiero import api as phiero
-import hiero
-# from openpype.hosts.hiero.api import lib
-# reload(lib)
-# reload(phiero)
+from openpype.hosts.hiero.otio import hiero_export
+
+from pprint import pformat
-class PreCollectInstances(api.ContextPlugin):
+class PrecollectInstances(pyblish.api.ContextPlugin):
"""Collect all Track items selection."""
- order = api.CollectorOrder - 0.509
- label = "Pre-collect Instances"
+ order = pyblish.api.CollectorOrder - 0.59
+ label = "Precollect Instances"
hosts = ["hiero"]
def process(self, context):
- track_items = phiero.get_track_items(
- selected=True, check_tagged=True, check_enabled=True)
- # only return enabled track items
- if not track_items:
- track_items = phiero.get_track_items(
- check_enabled=True, check_tagged=True)
- # get sequence and video tracks
- sequence = context.data["activeSequence"]
- tracks = sequence.videoTracks()
-
- # add collection to context
- tracks_effect_items = self.collect_sub_track_items(tracks)
-
- context.data["tracksEffectItems"] = tracks_effect_items
-
+ otio_timeline = context.data["otioTimeline"]
+ selected_timeline_items = phiero.get_track_items(
+ selected=True, check_enabled=True, check_tagged=True)
self.log.info(
- "Processing enabled track items: {}".format(len(track_items)))
+ "Processing enabled track items: {}".format(
+ selected_timeline_items))
+
+ for track_item in selected_timeline_items:
- for _ti in track_items:
data = dict()
- clip = _ti.source()
+ clip_name = track_item.name()
- # get clips subtracks and anotations
- annotations = self.clip_annotations(clip)
- subtracks = self.clip_subtrack(_ti)
- self.log.debug("Annotations: {}".format(annotations))
- self.log.debug(">> Subtracks: {}".format(subtracks))
+ # get openpype tag data
+ tag_data = phiero.get_track_item_pype_data(track_item)
+ self.log.debug("__ tag_data: {}".format(pformat(tag_data)))
- # get pype tag data
- tag_parsed_data = phiero.get_track_item_pype_data(_ti)
- # self.log.debug(pformat(tag_parsed_data))
-
- if not tag_parsed_data:
+ if not tag_data:
continue
- if tag_parsed_data.get("id") != "pyblish.avalon.instance":
+ if tag_data.get("id") != "pyblish.avalon.instance":
continue
+
+ # solve handles length
+ tag_data["handleStart"] = min(
+ tag_data["handleStart"], int(track_item.handleInLength()))
+ tag_data["handleEnd"] = min(
+ tag_data["handleEnd"], int(track_item.handleOutLength()))
+
# add tag data to instance data
data.update({
- k: v for k, v in tag_parsed_data.items()
+ k: v for k, v in tag_data.items()
if k not in ("id", "applieswhole", "label")
})
- asset = tag_parsed_data["asset"]
- subset = tag_parsed_data["subset"]
- review = tag_parsed_data.get("review")
- audio = tag_parsed_data.get("audio")
-
- # remove audio attribute from data
- data.pop("audio")
+ asset = tag_data["asset"]
+ subset = tag_data["subset"]
# insert family into families
- family = tag_parsed_data["family"]
- families = [str(f) for f in tag_parsed_data["families"]]
+ family = tag_data["family"]
+ families = [str(f) for f in tag_data["families"]]
families.insert(0, str(family))
- track = _ti.parent()
- media_source = _ti.source().mediaSource()
- source_path = media_source.firstpath()
- file_head = media_source.filenameHead()
- file_info = media_source.fileinfos().pop()
- source_first_frame = int(file_info.startFrame())
-
- # apply only for feview and master track instance
- if review:
- families += ["review", "ftrack"]
+ # form label
+ label = asset
+ if asset != clip_name:
+ label += " ({})".format(clip_name)
+ label += " {}".format(subset)
+ label += " {}".format("[" + ", ".join(families) + "]")
data.update({
- "name": "{} {} {}".format(asset, subset, families),
+ "name": "{}_{}".format(asset, subset),
+ "label": label,
"asset": asset,
- "item": _ti,
+ "item": track_item,
"families": families,
-
- # tags
- "tags": _ti.tags(),
-
- # track item attributes
- "track": track.name(),
- "trackItem": track,
-
- # version data
- "versionData": {
- "colorspace": _ti.sourceMediaColourTransform()
- },
-
- # source attribute
- "source": source_path,
- "sourceMedia": media_source,
- "sourcePath": source_path,
- "sourceFileHead": file_head,
- "sourceFirst": source_first_frame,
-
- # clip's effect
- "clipEffectItems": subtracks
+ "publish": tag_data["publish"],
+ "fps": context.data["fps"]
})
+ # otio clip data
+ otio_data = self.get_otio_clip_instance_data(
+ otio_timeline, track_item) or {}
+ self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
+ data.update(otio_data)
+ self.log.debug("__ data: {}".format(pformat(data)))
+
+ # add resolution
+ self.get_resolution_to_data(data, context)
+
+ # create instance
instance = context.create_instance(**data)
+ # create shot instance for shot attributes create/update
+ self.create_shot_instance(context, **data)
+
self.log.info("Creating instance: {}".format(instance))
+ self.log.debug(
+ "_ instance.data: {}".format(pformat(instance.data)))
- if audio:
- a_data = dict()
+ def get_resolution_to_data(self, data, context):
+ assert data.get("otioClip"), "Missing `otioClip` data"
- # add tag data to instance data
- a_data.update({
- k: v for k, v in tag_parsed_data.items()
- if k not in ("id", "applieswhole", "label")
- })
+ # solve source resolution option
+ if data.get("sourceResolution", None):
+ otio_clip_metadata = data[
+ "otioClip"].media_reference.metadata
+ data.update({
+ "resolutionWidth": otio_clip_metadata[
+ "openpype.source.width"],
+ "resolutionHeight": otio_clip_metadata[
+ "openpype.source.height"],
+ "pixelAspect": otio_clip_metadata[
+ "openpype.source.pixelAspect"]
+ })
+ else:
+ otio_tl_metadata = context.data["otioTimeline"].metadata
+ data.update({
+ "resolutionWidth": otio_tl_metadata["openpype.timeline.width"],
+ "resolutionHeight": otio_tl_metadata[
+ "openpype.timeline.height"],
+ "pixelAspect": otio_tl_metadata[
+ "openpype.timeline.pixelAspect"]
+ })
- # create main attributes
- subset = "audioMain"
- family = "audio"
- families = ["clip", "ftrack"]
- families.insert(0, str(family))
+ def create_shot_instance(self, context, **data):
+ master_layer = data.get("heroTrack")
+ hierarchy_data = data.get("hierarchyData")
+ asset = data.get("asset")
+ item = data.get("item")
+ clip_name = item.name()
- name = "{} {} {}".format(asset, subset, families)
+ if not master_layer:
+ return
- a_data.update({
- "name": name,
- "subset": subset,
- "asset": asset,
- "family": family,
- "families": families,
- "item": _ti,
+ if not hierarchy_data:
+ return
- # tags
- "tags": _ti.tags(),
- })
+ asset = data["asset"]
+ subset = "shotMain"
- a_instance = context.create_instance(**a_data)
- self.log.info("Creating audio instance: {}".format(a_instance))
+ # insert family into families
+ family = "shot"
+
+ # form label
+ label = asset
+ if asset != clip_name:
+ label += " ({}) ".format(clip_name)
+ label += " {}".format(subset)
+ label += " [{}]".format(family)
+
+ data.update({
+ "name": "{}_{}".format(asset, subset),
+ "label": label,
+ "subset": subset,
+ "asset": asset,
+ "family": family,
+ "families": []
+ })
+
+ instance = context.create_instance(**data)
+ self.log.info("Creating instance: {}".format(instance))
+ self.log.debug(
+ "_ instance.data: {}".format(pformat(instance.data)))
+
+ def get_otio_clip_instance_data(self, otio_timeline, track_item):
+ """
+ Return otio objects for timeline, track and clip
+
+ Args:
+ timeline_item_data (dict): timeline_item_data from list returned by
+ resolve.get_current_timeline_items()
+ otio_timeline (otio.schema.Timeline): otio object
+
+ Returns:
+ dict: otio clip object
+
+ """
+ ti_track_name = track_item.parent().name()
+ timeline_range = self.create_otio_time_range_from_timeline_item_data(
+ track_item)
+ for otio_clip in otio_timeline.each_clip():
+ track_name = otio_clip.parent().name
+ parent_range = otio_clip.range_in_parent()
+ if ti_track_name not in track_name:
+ continue
+ if otio_clip.name not in track_item.name():
+ continue
+ if openpype.lib.is_overlapping_otio_ranges(
+ parent_range, timeline_range, strict=True):
+
+ # add pypedata marker to otio_clip metadata
+ for marker in otio_clip.markers:
+ if phiero.pype_tag_name in marker.name:
+ otio_clip.metadata.update(marker.metadata)
+ return {"otioClip": otio_clip}
+
+ return None
@staticmethod
- def clip_annotations(clip):
- """
- Returns list of Clip's hiero.core.Annotation
- """
- annotations = []
- subTrackItems = flatten(clip.subTrackItems())
- annotations += [item for item in subTrackItems if isinstance(
- item, hiero.core.Annotation)]
- return annotations
+ def create_otio_time_range_from_timeline_item_data(track_item):
+ timeline = phiero.get_current_sequence()
+ frame_start = int(track_item.timelineIn())
+ frame_duration = int(track_item.sourceDuration())
+ fps = timeline.framerate().toFloat()
- @staticmethod
- def clip_subtrack(clip):
- """
- Returns list of Clip's hiero.core.SubTrackItem
- """
- subtracks = []
- subTrackItems = flatten(clip.parent().subTrackItems())
- for item in subTrackItems:
- # avoid all anotation
- if isinstance(item, hiero.core.Annotation):
- continue
- # # avoid all not anaibled
- if not item.isEnabled():
- continue
- subtracks.append(item)
- return subtracks
-
- @staticmethod
- def collect_sub_track_items(tracks):
- """
- Returns dictionary with track index as key and list of subtracks
- """
- # collect all subtrack items
- sub_track_items = dict()
- for track in tracks:
- items = track.items()
-
- # skip if no clips on track > need track with effect only
- if items:
- continue
-
- # skip all disabled tracks
- if not track.isEnabled():
- continue
-
- track_index = track.trackIndex()
- _sub_track_items = flatten(track.subTrackItems())
-
- # continue only if any subtrack items are collected
- if len(_sub_track_items) < 1:
- continue
-
- enabled_sti = list()
- # loop all found subtrack items and check if they are enabled
- for _sti in _sub_track_items:
- # checking if not enabled
- if not _sti.isEnabled():
- continue
- if isinstance(_sti, hiero.core.Annotation):
- continue
- # collect the subtrack item
- enabled_sti.append(_sti)
-
- # continue only if any subtrack items are collected
- if len(enabled_sti) < 1:
- continue
-
- # add collection of subtrackitems to dict
- sub_track_items[track_index] = enabled_sti
-
- return sub_track_items
+ return hiero_export.create_otio_time_range(
+ frame_start, frame_duration, fps)
diff --git a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
index ef7d07421b..bc4ef7e150 100644
--- a/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
+++ b/openpype/hosts/hiero/plugins/publish/precollect_workfile.py
@@ -1,52 +1,57 @@
import os
import pyblish.api
+import hiero.ui
from openpype.hosts.hiero import api as phiero
from avalon import api as avalon
+from pprint import pformat
+from openpype.hosts.hiero.otio import hiero_export
+from Qt.QtGui import QPixmap
+import tempfile
-
-class PreCollectWorkfile(pyblish.api.ContextPlugin):
+class PrecollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
- label = "Pre-collect Workfile"
- order = pyblish.api.CollectorOrder - 0.51
+ label = "Precollect Workfile"
+ order = pyblish.api.CollectorOrder - 0.6
def process(self, context):
+
asset = avalon.Session["AVALON_ASSET"]
subset = "workfile"
-
project = phiero.get_current_project()
- active_sequence = phiero.get_current_sequence()
- video_tracks = active_sequence.videoTracks()
- audio_tracks = active_sequence.audioTracks()
- current_file = project.path()
- staging_dir = os.path.dirname(current_file)
- base_name = os.path.basename(current_file)
+ active_timeline = hiero.ui.activeSequence()
+ fps = active_timeline.framerate().toFloat()
- # get workfile's colorspace properties
- _clrs = {}
- _clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride() # noqa
- _clrs["lutSetting16Bit"] = project.lutSetting16Bit()
- _clrs["lutSetting8Bit"] = project.lutSetting8Bit()
- _clrs["lutSettingFloat"] = project.lutSettingFloat()
- _clrs["lutSettingLog"] = project.lutSettingLog()
- _clrs["lutSettingViewer"] = project.lutSettingViewer()
- _clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
- _clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
- _clrs["ocioConfigName"] = project.ocioConfigName()
- _clrs["ocioConfigPath"] = project.ocioConfigPath()
+ # adding otio timeline to context
+ otio_timeline = hiero_export.create_otio_timeline()
- # set main project attributes to context
- context.data["activeProject"] = project
- context.data["activeSequence"] = active_sequence
- context.data["videoTracks"] = video_tracks
- context.data["audioTracks"] = audio_tracks
- context.data["currentFile"] = current_file
- context.data["colorspace"] = _clrs
+ # get workfile thumbnail paths
+ tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
+ thumbnail_name = "workfile_thumbnail.png"
+ thumbnail_path = os.path.join(tmp_staging, thumbnail_name)
- self.log.info("currentFile: {}".format(current_file))
+ # search for all windows with name of actual sequence
+ _windows = [w for w in hiero.ui.windowManager().windows()
+ if active_timeline.name() in w.windowTitle()]
+
+ # export window to thumb path
+ QPixmap.grabWidget(_windows[-1]).save(thumbnail_path, 'png')
+
+ # thumbnail
+ thumb_representation = {
+ 'files': thumbnail_name,
+ 'stagingDir': tmp_staging,
+ 'name': "thumbnail",
+ 'thumbnail': True,
+ 'ext': "png"
+ }
+
+ # get workfile paths
+ current_file = project.path()
+ staging_dir, base_name = os.path.split(current_file)
# creating workfile representation
- representation = {
+ workfile_representation = {
'name': 'hrox',
'ext': 'hrox',
'files': base_name,
@@ -59,16 +64,21 @@ class PreCollectWorkfile(pyblish.api.ContextPlugin):
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile",
-
- # version data
- "versionData": {
- "colorspace": _clrs
- },
-
- # source attribute
- "sourcePath": current_file,
- "representations": [representation]
+ "representations": [workfile_representation, thumb_representation]
}
+ # create instance with workfile
instance = context.create_instance(**instance_data)
+
+ # update context with main project attributes
+ context_data = {
+ "activeProject": project,
+ "otioTimeline": otio_timeline,
+ "currentFile": current_file,
+ "fps": fps,
+ }
+ context.data.update(context_data)
+
self.log.info("Creating instance: {}".format(instance))
+ self.log.debug("__ instance.data: {}".format(pformat(instance.data)))
+ self.log.debug("__ context_data: {}".format(pformat(context_data)))
diff --git a/openpype/hosts/hiero/plugins/publish/collect_assetbuilds.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/collect_assetbuilds.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py
diff --git a/openpype/hosts/hiero/plugins/publish/collect_clip_resolution.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_clip_resolution.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/collect_clip_resolution.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_clip_resolution.py
diff --git a/openpype/hosts/hiero/plugins/publish/collect_frame_ranges.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_frame_ranges.py
similarity index 97%
rename from openpype/hosts/hiero/plugins/publish/collect_frame_ranges.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_frame_ranges.py
index 39387578d2..21e12e89fa 100644
--- a/openpype/hosts/hiero/plugins/publish/collect_frame_ranges.py
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_frame_ranges.py
@@ -5,7 +5,7 @@ class CollectFrameRanges(pyblish.api.InstancePlugin):
""" Collect all framranges.
"""
- order = pyblish.api.CollectorOrder
+ order = pyblish.api.CollectorOrder - 0.1
label = "Collect Frame Ranges"
hosts = ["hiero"]
families = ["clip", "effect"]
diff --git a/openpype/hosts/hiero/plugins/publish/collect_hierarchy_context.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_hierarchy_context.py
similarity index 97%
rename from openpype/hosts/hiero/plugins/publish/collect_hierarchy_context.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_hierarchy_context.py
index ba3e388c53..0696a58e39 100644
--- a/openpype/hosts/hiero/plugins/publish/collect_hierarchy_context.py
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_hierarchy_context.py
@@ -39,8 +39,8 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
if not set(self.families).intersection(families):
continue
- # exclude if not masterLayer True
- if not instance.data.get("masterLayer"):
+ # exclude if not heroTrack True
+ if not instance.data.get("heroTrack"):
continue
# update families to include `shot` for hierarchy integration
diff --git a/openpype/hosts/hiero/plugins/publish/collect_host_version.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_host_version.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/collect_host_version.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_host_version.py
diff --git a/openpype/hosts/hiero/plugins/publish/collect_plates.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_plates.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/collect_plates.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_plates.py
diff --git a/openpype/hosts/hiero/plugins/publish/collect_review.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_review.py
similarity index 99%
rename from openpype/hosts/hiero/plugins/publish/collect_review.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_review.py
index a0ab00b355..b1d97a71d7 100644
--- a/openpype/hosts/hiero/plugins/publish/collect_review.py
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_review.py
@@ -29,7 +29,7 @@ class CollectReview(api.InstancePlugin):
Exception: description
"""
- review_track = instance.data.get("review")
+ review_track = instance.data.get("reviewTrack")
video_tracks = instance.context.data["videoTracks"]
for track in video_tracks:
if review_track not in track.name():
diff --git a/openpype/hosts/hiero/plugins/publish/collect_tag_tasks.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_tag_tasks.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/collect_tag_tasks.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/collect_tag_tasks.py
diff --git a/openpype/hosts/hiero/plugins/publish/extract_audio.py b/openpype/hosts/hiero/plugins/publish_old_workflow/extract_audio.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/extract_audio.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/extract_audio.py
diff --git a/openpype/hosts/hiero/plugins/publish/extract_clip_effects.py b/openpype/hosts/hiero/plugins/publish_old_workflow/extract_clip_effects.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/extract_clip_effects.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/extract_clip_effects.py
diff --git a/openpype/hosts/hiero/plugins/publish/extract_review_preparation.py b/openpype/hosts/hiero/plugins/publish_old_workflow/extract_review_preparation.py
similarity index 98%
rename from openpype/hosts/hiero/plugins/publish/extract_review_preparation.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/extract_review_preparation.py
index 5456ddc3c4..aac476e27a 100644
--- a/openpype/hosts/hiero/plugins/publish/extract_review_preparation.py
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/extract_review_preparation.py
@@ -132,7 +132,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
).format(**locals())
self.log.debug("ffprob_cmd: {}".format(ffprob_cmd))
- audio_check_output = openpype.api.subprocess(ffprob_cmd)
+ audio_check_output = openpype.api.run_subprocess(ffprob_cmd)
self.log.debug(
"audio_check_output: {}".format(audio_check_output))
@@ -167,7 +167,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
# try to get video native resolution data
try:
- resolution_output = openpype.api.subprocess((
+ resolution_output = openpype.api.run_subprocess((
"\"{ffprobe_path}\" -i \"{full_input_path}\""
" -v error "
"-select_streams v:0 -show_entries "
@@ -280,7 +280,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
# run subprocess
self.log.debug("Executing: {}".format(subprcs_cmd))
- output = openpype.api.subprocess(subprcs_cmd)
+ output = openpype.api.run_subprocess(subprcs_cmd)
self.log.debug("Output: {}".format(output))
repre_new = {
diff --git a/openpype/hosts/hiero/plugins/publish/precollect_clip_effects.py b/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_clip_effects.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/precollect_clip_effects.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/precollect_clip_effects.py
diff --git a/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_instances.py b/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_instances.py
new file mode 100644
index 0000000000..f9cc158e79
--- /dev/null
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_instances.py
@@ -0,0 +1,223 @@
+from compiler.ast import flatten
+from pyblish import api
+from openpype.hosts.hiero import api as phiero
+import hiero
+# from openpype.hosts.hiero.api import lib
+# reload(lib)
+# reload(phiero)
+
+
+class PreCollectInstances(api.ContextPlugin):
+ """Collect all Track items selection."""
+
+ order = api.CollectorOrder - 0.509
+ label = "Pre-collect Instances"
+ hosts = ["hiero"]
+
+ def process(self, context):
+ track_items = phiero.get_track_items(
+ selected=True, check_tagged=True, check_enabled=True)
+ # only return enabled track items
+ if not track_items:
+ track_items = phiero.get_track_items(
+ check_enabled=True, check_tagged=True)
+ # get sequence and video tracks
+ sequence = context.data["activeSequence"]
+ tracks = sequence.videoTracks()
+
+ # add collection to context
+ tracks_effect_items = self.collect_sub_track_items(tracks)
+
+ context.data["tracksEffectItems"] = tracks_effect_items
+
+ self.log.info(
+ "Processing enabled track items: {}".format(len(track_items)))
+
+ for _ti in track_items:
+ data = {}
+ clip = _ti.source()
+
+ # get clips subtracks and annotations
+ annotations = self.clip_annotations(clip)
+ subtracks = self.clip_subtrack(_ti)
+ self.log.debug("Annotations: {}".format(annotations))
+ self.log.debug(">> Subtracks: {}".format(subtracks))
+
+ # get pype tag data
+ tag_parsed_data = phiero.get_track_item_pype_data(_ti)
+ # self.log.debug(pformat(tag_parsed_data))
+
+ if not tag_parsed_data:
+ continue
+
+ if tag_parsed_data.get("id") != "pyblish.avalon.instance":
+ continue
+ # add tag data to instance data
+ data.update({
+ k: v for k, v in tag_parsed_data.items()
+ if k not in ("id", "applieswhole", "label")
+ })
+
+ asset = tag_parsed_data["asset"]
+ subset = tag_parsed_data["subset"]
+ review_track = tag_parsed_data.get("reviewTrack")
+ hero_track = tag_parsed_data.get("heroTrack")
+ audio = tag_parsed_data.get("audio")
+
+ # remove audio attribute from data
+ data.pop("audio")
+
+ # insert family into families
+ family = tag_parsed_data["family"]
+ families = [str(f) for f in tag_parsed_data["families"]]
+ families.insert(0, str(family))
+
+ track = _ti.parent()
+ media_source = _ti.source().mediaSource()
+ source_path = media_source.firstpath()
+ file_head = media_source.filenameHead()
+ file_info = media_source.fileinfos().pop()
+ source_first_frame = int(file_info.startFrame())
+
+ # apply only for review and hero track instance
+ if review_track and hero_track:
+ families += ["review", "ftrack"]
+
+ data.update({
+ "name": "{} {} {}".format(asset, subset, families),
+ "asset": asset,
+ "item": _ti,
+ "families": families,
+
+ # tags
+ "tags": _ti.tags(),
+
+ # track item attributes
+ "track": track.name(),
+ "trackItem": track,
+ "reviewTrack": review_track,
+
+ # version data
+ "versionData": {
+ "colorspace": _ti.sourceMediaColourTransform()
+ },
+
+ # source attribute
+ "source": source_path,
+ "sourceMedia": media_source,
+ "sourcePath": source_path,
+ "sourceFileHead": file_head,
+ "sourceFirst": source_first_frame,
+
+ # clip's effect
+ "clipEffectItems": subtracks
+ })
+
+ instance = context.create_instance(**data)
+
+ self.log.info("Creating instance.data: {}".format(instance.data))
+
+ if audio:
+ a_data = dict()
+
+ # add tag data to instance data
+ a_data.update({
+ k: v for k, v in tag_parsed_data.items()
+ if k not in ("id", "applieswhole", "label")
+ })
+
+ # create main attributes
+ subset = "audioMain"
+ family = "audio"
+ families = ["clip", "ftrack"]
+ families.insert(0, str(family))
+
+ name = "{} {} {}".format(asset, subset, families)
+
+ a_data.update({
+ "name": name,
+ "subset": subset,
+ "asset": asset,
+ "family": family,
+ "families": families,
+ "item": _ti,
+
+ # tags
+ "tags": _ti.tags(),
+ })
+
+ a_instance = context.create_instance(**a_data)
+ self.log.info("Creating audio instance: {}".format(a_instance))
+
+ @staticmethod
+ def clip_annotations(clip):
+ """
+ Returns list of Clip's hiero.core.Annotation
+ """
+ annotations = []
+ subTrackItems = flatten(clip.subTrackItems())
+ annotations += [item for item in subTrackItems if isinstance(
+ item, hiero.core.Annotation)]
+ return annotations
+
+ @staticmethod
+ def clip_subtrack(clip):
+ """
+ Returns list of Clip's hiero.core.SubTrackItem
+ """
+ subtracks = []
+ subTrackItems = flatten(clip.parent().subTrackItems())
+ for item in subTrackItems:
+ # avoid all annotations
+ if isinstance(item, hiero.core.Annotation):
+ continue
+ # skip items that are not enabled
+ if not item.isEnabled():
+ continue
+ subtracks.append(item)
+ return subtracks
+
+ @staticmethod
+ def collect_sub_track_items(tracks):
+ """
+ Returns dictionary with track index as key and list of subtracks
+ """
+ # collect all subtrack items
+ sub_track_items = dict()
+ for track in tracks:
+ items = track.items()
+
+ # skip if no clips on track > need track with effect only
+ if items:
+ continue
+
+ # skip all disabled tracks
+ if not track.isEnabled():
+ continue
+
+ track_index = track.trackIndex()
+ _sub_track_items = flatten(track.subTrackItems())
+
+ # continue only if any subtrack items are collected
+ if len(_sub_track_items) < 1:
+ continue
+
+ enabled_sti = list()
+ # loop all found subtrack items and check if they are enabled
+ for _sti in _sub_track_items:
+ # checking if not enabled
+ if not _sti.isEnabled():
+ continue
+ if isinstance(_sti, hiero.core.Annotation):
+ continue
+ # collect the subtrack item
+ enabled_sti.append(_sti)
+
+ # continue only if any subtrack items are collected
+ if len(enabled_sti) < 1:
+ continue
+
+ # add collection of subtrackitems to dict
+ sub_track_items[track_index] = enabled_sti
+
+ return sub_track_items
diff --git a/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_workfile.py b/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_workfile.py
new file mode 100644
index 0000000000..ef7d07421b
--- /dev/null
+++ b/openpype/hosts/hiero/plugins/publish_old_workflow/precollect_workfile.py
@@ -0,0 +1,74 @@
+import os
+import pyblish.api
+from openpype.hosts.hiero import api as phiero
+from avalon import api as avalon
+
+
+class PreCollectWorkfile(pyblish.api.ContextPlugin):
+ """Inject the current working file into context"""
+
+ label = "Pre-collect Workfile"
+ order = pyblish.api.CollectorOrder - 0.51
+
+ def process(self, context):
+ asset = avalon.Session["AVALON_ASSET"]
+ subset = "workfile"
+
+ project = phiero.get_current_project()
+ active_sequence = phiero.get_current_sequence()
+ video_tracks = active_sequence.videoTracks()
+ audio_tracks = active_sequence.audioTracks()
+ current_file = project.path()
+ staging_dir = os.path.dirname(current_file)
+ base_name = os.path.basename(current_file)
+
+ # get workfile's colorspace properties
+ _clrs = {}
+ _clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride() # noqa
+ _clrs["lutSetting16Bit"] = project.lutSetting16Bit()
+ _clrs["lutSetting8Bit"] = project.lutSetting8Bit()
+ _clrs["lutSettingFloat"] = project.lutSettingFloat()
+ _clrs["lutSettingLog"] = project.lutSettingLog()
+ _clrs["lutSettingViewer"] = project.lutSettingViewer()
+ _clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
+ _clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
+ _clrs["ocioConfigName"] = project.ocioConfigName()
+ _clrs["ocioConfigPath"] = project.ocioConfigPath()
+
+ # set main project attributes to context
+ context.data["activeProject"] = project
+ context.data["activeSequence"] = active_sequence
+ context.data["videoTracks"] = video_tracks
+ context.data["audioTracks"] = audio_tracks
+ context.data["currentFile"] = current_file
+ context.data["colorspace"] = _clrs
+
+ self.log.info("currentFile: {}".format(current_file))
+
+ # creating workfile representation
+ representation = {
+ 'name': 'hrox',
+ 'ext': 'hrox',
+ 'files': base_name,
+ "stagingDir": staging_dir,
+ }
+
+ instance_data = {
+ "name": "{}_{}".format(asset, subset),
+ "asset": asset,
+ "subset": "{}{}".format(asset, subset.capitalize()),
+ "item": project,
+ "family": "workfile",
+
+ # version data
+ "versionData": {
+ "colorspace": _clrs
+ },
+
+ # source attribute
+ "sourcePath": current_file,
+ "representations": [representation]
+ }
+
+ instance = context.create_instance(**instance_data)
+ self.log.info("Creating instance: {}".format(instance))
diff --git a/openpype/hosts/hiero/plugins/publish/validate_audio.py b/openpype/hosts/hiero/plugins/publish_old_workflow/validate_audio.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/validate_audio.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/validate_audio.py
diff --git a/openpype/hosts/hiero/plugins/publish/validate_hierarchy.py b/openpype/hosts/hiero/plugins/publish_old_workflow/validate_hierarchy.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/validate_hierarchy.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/validate_hierarchy.py
diff --git a/openpype/hosts/hiero/plugins/publish/validate_names.py b/openpype/hosts/hiero/plugins/publish_old_workflow/validate_names.py
similarity index 100%
rename from openpype/hosts/hiero/plugins/publish/validate_names.py
rename to openpype/hosts/hiero/plugins/publish_old_workflow/validate_names.py
diff --git a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportTask.py b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportTask.py
index 90504ccd18..7e1a8df2dc 100644
--- a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportTask.py
+++ b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportTask.py
@@ -1,338 +1,28 @@
-# MIT License
-#
-# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+__author__ = "Daniel Flehner Heen"
+__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import os
-import re
import hiero.core
from hiero.core import util
import opentimelineio as otio
-
-
-marker_color_map = {
- "magenta": otio.schema.MarkerColor.MAGENTA,
- "red": otio.schema.MarkerColor.RED,
- "yellow": otio.schema.MarkerColor.YELLOW,
- "green": otio.schema.MarkerColor.GREEN,
- "cyan": otio.schema.MarkerColor.CYAN,
- "blue": otio.schema.MarkerColor.BLUE,
-}
-
+from openpype.hosts.hiero.otio import hiero_export
class OTIOExportTask(hiero.core.TaskBase):
def __init__(self, initDict):
"""Initialize"""
hiero.core.TaskBase.__init__(self, initDict)
+ self.otio_timeline = None
def name(self):
return str(type(self))
- def get_rate(self, item):
- if not hasattr(item, 'framerate'):
- item = item.sequence()
-
- num, den = item.framerate().toRational()
- rate = float(num) / float(den)
-
- if rate.is_integer():
- return rate
-
- return round(rate, 2)
-
- def get_clip_ranges(self, trackitem):
- # Get rate from source or sequence
- if trackitem.source().mediaSource().hasVideo():
- rate_item = trackitem.source()
-
- else:
- rate_item = trackitem.sequence()
-
- source_rate = self.get_rate(rate_item)
-
- # Reversed video/audio
- if trackitem.playbackSpeed() < 0:
- start = trackitem.sourceOut()
-
- else:
- start = trackitem.sourceIn()
-
- source_start_time = otio.opentime.RationalTime(
- start,
- source_rate
- )
- source_duration = otio.opentime.RationalTime(
- trackitem.duration(),
- source_rate
- )
-
- source_range = otio.opentime.TimeRange(
- start_time=source_start_time,
- duration=source_duration
- )
-
- hiero_clip = trackitem.source()
-
- available_range = None
- if hiero_clip.mediaSource().isMediaPresent():
- start_time = otio.opentime.RationalTime(
- hiero_clip.mediaSource().startTime(),
- source_rate
- )
- duration = otio.opentime.RationalTime(
- hiero_clip.mediaSource().duration(),
- source_rate
- )
- available_range = otio.opentime.TimeRange(
- start_time=start_time,
- duration=duration
- )
-
- return source_range, available_range
-
- def add_gap(self, trackitem, otio_track, prev_out):
- gap_length = trackitem.timelineIn() - prev_out
- if prev_out != 0:
- gap_length -= 1
-
- rate = self.get_rate(trackitem.sequence())
- gap = otio.opentime.TimeRange(
- duration=otio.opentime.RationalTime(
- gap_length,
- rate
- )
- )
- otio_gap = otio.schema.Gap(source_range=gap)
- otio_track.append(otio_gap)
-
- def get_marker_color(self, tag):
- icon = tag.icon()
- pat = r'icons:Tag(?P<color>\w+)\.\w+'
-
- res = re.search(pat, icon)
- if res:
- color = res.groupdict().get('color')
- if color.lower() in marker_color_map:
- return marker_color_map[color.lower()]
-
- return otio.schema.MarkerColor.RED
-
- def add_markers(self, hiero_item, otio_item):
- for tag in hiero_item.tags():
- if not tag.visible():
- continue
-
- if tag.name() == 'Copy':
- # Hiero adds this tag to a lot of clips
- continue
-
- frame_rate = self.get_rate(hiero_item)
-
- marked_range = otio.opentime.TimeRange(
- start_time=otio.opentime.RationalTime(
- tag.inTime(),
- frame_rate
- ),
- duration=otio.opentime.RationalTime(
- int(tag.metadata().dict().get('tag.length', '0')),
- frame_rate
- )
- )
-
- metadata = dict(
- Hiero=tag.metadata().dict()
- )
- # Store the source item for future import assignment
- metadata['Hiero']['source_type'] = hiero_item.__class__.__name__
-
- marker = otio.schema.Marker(
- name=tag.name(),
- color=self.get_marker_color(tag),
- marked_range=marked_range,
- metadata=metadata
- )
-
- otio_item.markers.append(marker)
-
- def add_clip(self, trackitem, otio_track, itemindex):
- hiero_clip = trackitem.source()
-
- # Add Gap if needed
- if itemindex == 0:
- prev_item = trackitem
-
- else:
- prev_item = trackitem.parent().items()[itemindex - 1]
-
- clip_diff = trackitem.timelineIn() - prev_item.timelineOut()
-
- if itemindex == 0 and trackitem.timelineIn() > 0:
- self.add_gap(trackitem, otio_track, 0)
-
- elif itemindex and clip_diff != 1:
- self.add_gap(trackitem, otio_track, prev_item.timelineOut())
-
- # Create Clip
- source_range, available_range = self.get_clip_ranges(trackitem)
-
- otio_clip = otio.schema.Clip(
- name=trackitem.name(),
- source_range=source_range
- )
-
- # Add media reference
- media_reference = otio.schema.MissingReference()
- if hiero_clip.mediaSource().isMediaPresent():
- source = hiero_clip.mediaSource()
- first_file = source.fileinfos()[0]
- path = first_file.filename()
-
- if "%" in path:
- path = re.sub(r"%\d+d", "%d", path)
- if "#" in path:
- path = re.sub(r"#+", "%d", path)
-
- media_reference = otio.schema.ExternalReference(
- target_url=u'{}'.format(path),
- available_range=available_range
- )
-
- otio_clip.media_reference = media_reference
-
- # Add Time Effects
- playbackspeed = trackitem.playbackSpeed()
- if playbackspeed != 1:
- if playbackspeed == 0:
- time_effect = otio.schema.FreezeFrame()
-
- else:
- time_effect = otio.schema.LinearTimeWarp(
- time_scalar=playbackspeed
- )
- otio_clip.effects.append(time_effect)
-
- # Add tags as markers
- if self._preset.properties()["includeTags"]:
- self.add_markers(trackitem, otio_clip)
- self.add_markers(trackitem.source(), otio_clip)
-
- otio_track.append(otio_clip)
-
- # Add Transition if needed
- if trackitem.inTransition() or trackitem.outTransition():
- self.add_transition(trackitem, otio_track)
-
- def add_transition(self, trackitem, otio_track):
- transitions = []
-
- if trackitem.inTransition():
- if trackitem.inTransition().alignment().name == 'kFadeIn':
- transitions.append(trackitem.inTransition())
-
- if trackitem.outTransition():
- transitions.append(trackitem.outTransition())
-
- for transition in transitions:
- alignment = transition.alignment().name
-
- if alignment == 'kFadeIn':
- in_offset_frames = 0
- out_offset_frames = (
- transition.timelineOut() - transition.timelineIn()
- ) + 1
-
- elif alignment == 'kFadeOut':
- in_offset_frames = (
- trackitem.timelineOut() - transition.timelineIn()
- ) + 1
- out_offset_frames = 0
-
- elif alignment == 'kDissolve':
- in_offset_frames = (
- transition.inTrackItem().timelineOut() -
- transition.timelineIn()
- )
- out_offset_frames = (
- transition.timelineOut() -
- transition.outTrackItem().timelineIn()
- )
-
- else:
- # kUnknown transition is ignored
- continue
-
- rate = trackitem.source().framerate().toFloat()
- in_time = otio.opentime.RationalTime(in_offset_frames, rate)
- out_time = otio.opentime.RationalTime(out_offset_frames, rate)
-
- otio_transition = otio.schema.Transition(
- name=alignment, # Consider placing Hiero name in metadata
- transition_type=otio.schema.TransitionTypes.SMPTE_Dissolve,
- in_offset=in_time,
- out_offset=out_time
- )
-
- if alignment == 'kFadeIn':
- otio_track.insert(-1, otio_transition)
-
- else:
- otio_track.append(otio_transition)
-
-
- def add_tracks(self):
- for track in self._sequence.items():
- if isinstance(track, hiero.core.AudioTrack):
- kind = otio.schema.TrackKind.Audio
-
- else:
- kind = otio.schema.TrackKind.Video
-
- otio_track = otio.schema.Track(name=track.name(), kind=kind)
-
- for itemindex, trackitem in enumerate(track):
- if isinstance(trackitem.source(), hiero.core.Clip):
- self.add_clip(trackitem, otio_track, itemindex)
-
- self.otio_timeline.tracks.append(otio_track)
-
- # Add tags as markers
- if self._preset.properties()["includeTags"]:
- self.add_markers(self._sequence, self.otio_timeline.tracks)
-
- def create_OTIO(self):
- self.otio_timeline = otio.schema.Timeline()
-
- # Set global start time based on sequence
- self.otio_timeline.global_start_time = otio.opentime.RationalTime(
- self._sequence.timecodeStart(),
- self._sequence.framerate().toFloat()
- )
- self.otio_timeline.name = self._sequence.name()
-
- self.add_tracks()
-
def startTask(self):
- self.create_OTIO()
+ self.otio_timeline = hiero_export.create_otio_timeline()
def taskStep(self):
return False
@@ -350,7 +40,7 @@ class OTIOExportTask(hiero.core.TaskBase):
util.filesystem.makeDirs(dirname)
# write otio file
- otio.adapters.write_to_file(self.otio_timeline, exportPath)
+ hiero_export.write_to_file(self.otio_timeline, exportPath)
# Catch all exceptions and log error
except Exception as e:
@@ -370,7 +60,7 @@ class OTIOExportPreset(hiero.core.TaskPresetBase):
"""Initialise presets to default values"""
hiero.core.TaskPresetBase.__init__(self, OTIOExportTask, name)
- self.properties()["includeTags"] = True
+ self.properties()["includeTags"] = hiero_export.include_tags = True
self.properties().update(properties)
def supportedItems(self):
diff --git a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportUI.py b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportUI.py
index 887ff05ec8..9b83eefedf 100644
--- a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportUI.py
+++ b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/OTIOExportUI.py
@@ -1,3 +1,9 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+__author__ = "Daniel Flehner Heen"
+__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
+
import hiero.ui
import OTIOExportTask
@@ -14,6 +20,7 @@ except ImportError:
FormLayout = QFormLayout # lint:ok
+from openpype.hosts.hiero.otio import hiero_export
class OTIOExportUI(hiero.ui.TaskUIBase):
def __init__(self, preset):
@@ -27,7 +34,7 @@ class OTIOExportUI(hiero.ui.TaskUIBase):
def includeMarkersCheckboxChanged(self, state):
# Slot to handle change of checkbox state
- self._preset.properties()["includeTags"] = state == QtCore.Qt.Checked
+ hiero_export.include_tags = state == QtCore.Qt.Checked
def populateUI(self, widget, exportTemplate):
layout = widget.layout()
diff --git a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/__init__.py b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/__init__.py
index 67e6e78d35..3c09655f01 100644
--- a/openpype/hosts/hiero/startup/Python/Startup/otioexporter/__init__.py
+++ b/openpype/hosts/hiero/startup/Python/Startup/otioexporter/__init__.py
@@ -1,25 +1,3 @@
-# MIT License
-#
-# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
from OTIOExportTask import OTIOExportTask
from OTIOExportUI import OTIOExportUI
diff --git a/openpype/hosts/hiero/startup/Python/StartupUI/otioimporter/__init__.py b/openpype/hosts/hiero/startup/Python/StartupUI/otioimporter/__init__.py
index 1503a9e9ac..0f0a643909 100644
--- a/openpype/hosts/hiero/startup/Python/StartupUI/otioimporter/__init__.py
+++ b/openpype/hosts/hiero/startup/Python/StartupUI/otioimporter/__init__.py
@@ -1,42 +1,91 @@
-# MIT License
-#
-# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in
-# all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+__author__ = "Daniel Flehner Heen"
+__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import hiero.ui
import hiero.core
-from otioimporter.OTIOImport import load_otio
+import PySide2.QtWidgets as qw
+
+from openpype.hosts.hiero.otio.hiero_import import load_otio
+
+
+class OTIOProjectSelect(qw.QDialog):
+
+ def __init__(self, projects, *args, **kwargs):
+ super(OTIOProjectSelect, self).__init__(*args, **kwargs)
+ self.setWindowTitle('Please select active project')
+ self.layout = qw.QVBoxLayout()
+
+ self.label = qw.QLabel(
+ 'Unable to determine which project to import sequence to.\n'
+ 'Please select one.'
+ )
+ self.layout.addWidget(self.label)
+
+ self.projects = qw.QComboBox()
+ self.projects.addItems([p.name() for p in projects])
+ self.layout.addWidget(self.projects)
+
+ QBtn = qw.QDialogButtonBox.Ok | qw.QDialogButtonBox.Cancel
+ self.buttonBox = qw.QDialogButtonBox(QBtn)
+ self.buttonBox.accepted.connect(self.accept)
+ self.buttonBox.rejected.connect(self.reject)
+
+ self.layout.addWidget(self.buttonBox)
+ self.setLayout(self.layout)
+
+
+def get_sequence(view):
+ sequence = None
+ if isinstance(view, hiero.ui.TimelineEditor):
+ sequence = view.sequence()
+
+ elif isinstance(view, hiero.ui.BinView):
+ for item in view.selection():
+ if not hasattr(item, 'activeItem'):
+ continue
+
+ if isinstance(item.activeItem(), hiero.core.Sequence):
+ sequence = item.activeItem()
+
+ return sequence
def OTIO_menu_action(event):
- otio_action = hiero.ui.createMenuAction(
- 'Import OTIO',
+ # Menu actions
+ otio_import_action = hiero.ui.createMenuAction(
+ 'Import OTIO...',
open_otio_file,
icon=None
)
- hiero.ui.registerAction(otio_action)
+
+ otio_add_track_action = hiero.ui.createMenuAction(
+ 'New Track(s) from OTIO...',
+ open_otio_file,
+ icon=None
+ )
+ otio_add_track_action.setEnabled(False)
+
+ hiero.ui.registerAction(otio_import_action)
+ hiero.ui.registerAction(otio_add_track_action)
+
+ view = hiero.ui.currentContextMenuView()
+
+ if view:
+ sequence = get_sequence(view)
+ if sequence:
+ otio_add_track_action.setEnabled(True)
+
for action in event.menu.actions():
if action.text() == 'Import':
- action.menu().addAction(otio_action)
- break
+ action.menu().addAction(otio_import_action)
+ action.menu().addAction(otio_add_track_action)
+
+ elif action.text() == 'New Track':
+ action.menu().addAction(otio_add_track_action)
def open_otio_file():
@@ -45,8 +94,39 @@ def open_otio_file():
pattern='*.otio',
requiredExtension='.otio'
)
+
+ selection = None
+ sequence = None
+
+ view = hiero.ui.currentContextMenuView()
+ if view:
+ sequence = get_sequence(view)
+ selection = view.selection()
+
+ if sequence:
+ project = sequence.project()
+
+ elif selection:
+ project = selection[0].project()
+
+ elif len(hiero.core.projects()) > 1:
+ dialog = OTIOProjectSelect(hiero.core.projects())
+ if dialog.exec_():
+ project = hiero.core.projects()[dialog.projects.currentIndex()]
+
+ else:
+ bar = hiero.ui.mainWindow().statusBar()
+ bar.showMessage(
+ 'OTIO Import aborted by user',
+ timeout=3000
+ )
+ return
+
+ else:
+ project = hiero.core.projects()[-1]
+
for otio_file in files:
- load_otio(otio_file)
+ load_otio(otio_file, project, sequence)
# HieroPlayer is quite limited and can't create transitions etc.
@@ -55,3 +135,7 @@ if not hiero.core.isHieroPlayer():
"kShowContextMenu/kBin",
OTIO_menu_action
)
+ hiero.core.events.registerInterest(
+ "kShowContextMenu/kTimeline",
+ OTIO_menu_action
+ )
diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py
index dd586ca02d..1f0f90811f 100644
--- a/openpype/hosts/houdini/api/lib.py
+++ b/openpype/hosts/houdini/api/lib.py
@@ -210,7 +210,7 @@ def validate_fps():
if current_fps != fps:
- from ...widgets import popup
+ from openpype.widgets import popup
# Find main window
parent = hou.ui.mainQtWindow()
@@ -219,8 +219,8 @@ def validate_fps():
else:
dialog = popup.Popup2(parent=parent)
dialog.setModal(True)
- dialog.setWindowTitle("Maya scene not in line with project")
- dialog.setMessage("The FPS is out of sync, please fix")
+ dialog.setWindowTitle("Houdini scene not in line with project")
+ dialog.setMessage("The FPS is out of sync, please fix it")
# Set new text for button (add optional argument for the popup?)
toggle = dialog.widgets["toggle"]
diff --git a/openpype/hosts/houdini/startup/MainMenuCommon.XML b/openpype/hosts/houdini/startup/MainMenuCommon.xml
similarity index 100%
rename from openpype/hosts/houdini/startup/MainMenuCommon.XML
rename to openpype/hosts/houdini/startup/MainMenuCommon.xml
diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py
index ae2d329a97..a83ff98c99 100644
--- a/openpype/hosts/maya/api/lib.py
+++ b/openpype/hosts/maya/api/lib.py
@@ -1872,7 +1872,7 @@ def set_context_settings():
# Set project fps
fps = asset_data.get("fps", project_data.get("fps", 25))
- api.Session["AVALON_FPS"] = fps
+ api.Session["AVALON_FPS"] = str(fps)
set_scene_fps(fps)
# Set project resolution
diff --git a/openpype/hosts/maya/plugins/create/create_redshift_proxy.py b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py
new file mode 100644
index 0000000000..419a8d99d4
--- /dev/null
+++ b/openpype/hosts/maya/plugins/create/create_redshift_proxy.py
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+"""Creator of Redshift proxy subset types."""
+
+from openpype.hosts.maya.api import plugin, lib
+
+
+class CreateRedshiftProxy(plugin.Creator):
+ """Create instance of Redshift Proxy subset."""
+
+ name = "redshiftproxy"
+ label = "Redshift Proxy"
+ family = "redshiftproxy"
+ icon = "gears"
+
+ def __init__(self, *args, **kwargs):
+ super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+
+ animation_data = lib.collect_animation_data()
+
+ self.data["animation"] = False
+ self.data["proxyFrameStart"] = animation_data["frameStart"]
+ self.data["proxyFrameEnd"] = animation_data["frameEnd"]
+ self.data["proxyFrameStep"] = animation_data["step"]
diff --git a/openpype/hosts/maya/plugins/load/load_look.py b/openpype/hosts/maya/plugins/load/load_look.py
index 4392d1f78d..c39bbc497e 100644
--- a/openpype/hosts/maya/plugins/load/load_look.py
+++ b/openpype/hosts/maya/plugins/load/load_look.py
@@ -105,7 +105,23 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# Load relationships
shader_relation = api.get_representation_path(json_representation)
with open(shader_relation, "r") as f:
- relationships = json.load(f)
+ json_data = json.load(f)
+
+ for rel, data in json_data["relationships"].items():
+ # process only non-shading nodes
+ current_node = "{}:{}".format(container["namespace"], rel)
+ if current_node in shader_nodes:
+ continue
+ print("processing {}".format(rel))
+ current_members = set(cmds.ls(
+ cmds.sets(current_node, query=True) or [], long=True))
+ new_members = {"{}".format(
+ m["name"]) for m in data["members"] or []}
+ dif = new_members.difference(current_members)
+
+ # add to set
+ cmds.sets(
+ dif, forceElement="{}:{}".format(container["namespace"], rel))
# update of reference could result in failed edits - material is not
# present because of renaming etc.
@@ -120,7 +136,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
cmds.file(cr=reference_node) # cleanReference
# reapply shading groups from json representation on orig nodes
- openpype.hosts.maya.api.lib.apply_shaders(relationships,
+ openpype.hosts.maya.api.lib.apply_shaders(json_data,
shader_nodes,
orig_nodes)
@@ -128,12 +144,13 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"All successful edits were kept intact.\n",
"Failed and removed edits:"]
msg.extend(failed_edits)
+
msg = ScrollMessageBox(QtWidgets.QMessageBox.Warning,
"Some reference edit failed",
msg)
msg.exec_()
- attributes = relationships.get("attributes", [])
+ attributes = json_data.get("attributes", [])
# region compute lookup
nodes_by_id = defaultdict(list)
diff --git a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py
new file mode 100644
index 0000000000..4c6a187bc3
--- /dev/null
+++ b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py
@@ -0,0 +1,146 @@
+# -*- coding: utf-8 -*-
+"""Loader for Redshift proxy."""
+from avalon.maya import lib
+from avalon import api
+from openpype.api import get_project_settings
+import os
+import maya.cmds as cmds
+import clique
+
+
+class RedshiftProxyLoader(api.Loader):
+ """Load Redshift proxy"""
+
+ families = ["redshiftproxy"]
+ representations = ["rs"]
+
+ label = "Import Redshift Proxy"
+ order = -10
+ icon = "code-fork"
+ color = "orange"
+
+ def load(self, context, name=None, namespace=None, options=None):
+ """Plugin entry point."""
+
+ from avalon.maya.pipeline import containerise
+ from openpype.hosts.maya.api.lib import namespaced
+
+ try:
+ family = context["representation"]["context"]["family"]
+ except KeyError:
+ family = "redshiftproxy"
+
+ asset_name = context['asset']["name"]
+ namespace = namespace or lib.unique_namespace(
+ asset_name + "_",
+ prefix="_" if asset_name[0].isdigit() else "",
+ suffix="_",
+ )
+
+ # Ensure Redshift for Maya is loaded.
+ cmds.loadPlugin("redshift4maya", quiet=True)
+
+ with lib.maintained_selection():
+ cmds.namespace(addNamespace=namespace)
+ with namespaced(namespace, new=False):
+ nodes, group_node = self.create_rs_proxy(
+ name, self.fname)
+
+ self[:] = nodes
+ if not nodes:
+ return
+
+ # colour the group node
+ settings = get_project_settings(os.environ['AVALON_PROJECT'])
+ colors = settings['maya']['load']['colors']
+ c = colors.get(family)
+ if c is not None:
+ cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)
+ cmds.setAttr("{0}.outlinerColor".format(group_node),
+ c[0], c[1], c[2])
+
+ return containerise(
+ name=name,
+ namespace=namespace,
+ nodes=nodes,
+ context=context,
+ loader=self.__class__.__name__)
+
+ def update(self, container, representation):
+
+ node = container['objectName']
+ assert cmds.objExists(node), "Missing container"
+
+ members = cmds.sets(node, query=True) or []
+ rs_meshes = cmds.ls(members, type="RedshiftProxyMesh")
+ assert rs_meshes, "Cannot find RedshiftProxyMesh in container"
+
+ filename = api.get_representation_path(representation)
+
+ for rs_mesh in rs_meshes:
+ cmds.setAttr("{}.fileName".format(rs_mesh),
+ filename,
+ type="string")
+
+ # Update metadata
+ cmds.setAttr("{}.representation".format(node),
+ str(representation["_id"]),
+ type="string")
+
+ def remove(self, container):
+
+ # Delete container and its contents
+ if cmds.objExists(container['objectName']):
+ members = cmds.sets(container['objectName'], query=True) or []
+ cmds.delete([container['objectName']] + members)
+
+ # Remove the namespace, if empty
+ namespace = container['namespace']
+ if cmds.namespace(exists=namespace):
+ members = cmds.namespaceInfo(namespace, listNamespace=True)
+ if not members:
+ cmds.namespace(removeNamespace=namespace)
+ else:
+ self.log.warning("Namespace not deleted because it "
+ "still has members: %s", namespace)
+
+ def switch(self, container, representation):
+ self.update(container, representation)
+
+ def create_rs_proxy(self, name, path):
+ """Create a Redshift proxy mesh pointing to a proxy file.
+
+ Args:
+ name (str): Proxy name.
+ path (str): Path to proxy file.
+
+ Returns:
+ (list, str): Created nodes and the name of their parent group node.
+
+ """
+ rs_mesh = cmds.createNode(
+ 'RedshiftProxyMesh', name="{}_RS".format(name))
+ mesh_shape = cmds.createNode("mesh", name="{}_GEOShape".format(name))
+
+ cmds.setAttr("{}.fileName".format(rs_mesh),
+ path,
+ type="string")
+
+ cmds.connectAttr("{}.outMesh".format(rs_mesh),
+ "{}.inMesh".format(mesh_shape))
+
+ group_node = cmds.group(empty=True, name="{}_GRP".format(name))
+ mesh_transform = cmds.listRelatives(mesh_shape,
+ parent=True, fullPath=True)
+ cmds.parent(mesh_transform, group_node)
+ nodes = [rs_mesh, mesh_shape, group_node]
+
+ # determine if we need to enable animation support
+ files_in_folder = os.listdir(os.path.dirname(path))
+ collections, remainder = clique.assemble(files_in_folder)
+
+ if collections:
+ cmds.setAttr("{}.useFrameExtension".format(rs_mesh), 1)
+
+ return nodes, group_node
diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py
index acc6d8f128..bf24b463ac 100644
--- a/openpype/hosts/maya/plugins/publish/collect_look.py
+++ b/openpype/hosts/maya/plugins/publish/collect_look.py
@@ -1,8 +1,10 @@
+# -*- coding: utf-8 -*-
+"""Maya look collector."""
import re
import os
import glob
-from maya import cmds
+from maya import cmds # noqa
import pyblish.api
from openpype.hosts.maya.api import lib
@@ -16,6 +18,11 @@ SHAPE_ATTRS = ["castsShadows",
"doubleSided",
"opposite"]
+RENDERER_NODE_TYPES = [
+ # redshift
+ "RedshiftMeshParameters"
+]
+
SHAPE_ATTRS = set(SHAPE_ATTRS)
@@ -29,7 +36,6 @@ def get_look_attrs(node):
list: Attribute names to extract
"""
-
# When referenced get only attributes that are "changed since file open"
# which includes any reference edits, otherwise take *all* user defined
# attributes
@@ -219,9 +225,13 @@ class CollectLook(pyblish.api.InstancePlugin):
with lib.renderlayer(instance.data["renderlayer"]):
self.collect(instance)
-
def collect(self, instance):
+ """Collect looks.
+
+ Args:
+ instance: Instance to collect.
+
+ """
self.log.info("Looking for look associations "
"for %s" % instance.data['name'])
@@ -235,48 +245,91 @@ class CollectLook(pyblish.api.InstancePlugin):
self.log.info("Gathering set relations..")
# Ensure iteration happen in a list so we can remove keys from the
# dict within the loop
- for objset in list(sets):
- self.log.debug("From %s.." % objset)
+
+        # attribute types to skip on renderer-specific nodes
+ disabled_types = ["message", "TdataCompound"]
+
+ for obj_set in list(sets):
+ self.log.debug("From {}".format(obj_set))
+
+            # if the node is a renderer-specific node type, it will be
+            # serialized with its attributes.
+ if cmds.nodeType(obj_set) in RENDERER_NODE_TYPES:
+ self.log.info("- {} is {}".format(
+ obj_set, cmds.nodeType(obj_set)))
+
+ node_attrs = []
+
+ # serialize its attributes so they can be recreated on look
+ # load.
+ for attr in cmds.listAttr(obj_set):
+ # skip publishedNodeInfo attributes as they break
+ # getAttr() and we don't need them anyway
+ if attr.startswith("publishedNodeInfo"):
+ continue
+
+                    # skip attribute types defined in 'disabled_types' list
+ if cmds.getAttr("{}.{}".format(obj_set, attr), type=True) in disabled_types: # noqa
+ continue
+
+ node_attrs.append((
+ attr,
+ cmds.getAttr("{}.{}".format(obj_set, attr)),
+ cmds.getAttr(
+ "{}.{}".format(obj_set, attr), type=True)
+ ))
+
+ for member in cmds.ls(
+ cmds.sets(obj_set, query=True), long=True):
+ member_data = self.collect_member_data(member,
+ instance_lookup)
+ if not member_data:
+ continue
+
+ # Add information of the node to the members list
+ sets[obj_set]["members"].append(member_data)
# Get all nodes of the current objectSet (shadingEngine)
- for member in cmds.ls(cmds.sets(objset, query=True), long=True):
+ for member in cmds.ls(cmds.sets(obj_set, query=True), long=True):
member_data = self.collect_member_data(member,
instance_lookup)
if not member_data:
continue
# Add information of the node to the members list
- sets[objset]["members"].append(member_data)
+ sets[obj_set]["members"].append(member_data)
# Remove sets that didn't have any members assigned in the end
# Thus the data will be limited to only what we need.
- self.log.info("objset {}".format(sets[objset]))
- if not sets[objset]["members"] or (not objset.endswith("SG")):
- self.log.info("Removing redundant set information: "
- "%s" % objset)
- sets.pop(objset, None)
+ self.log.info("obj_set {}".format(sets[obj_set]))
+ if not sets[obj_set]["members"]:
+ self.log.info(
+ "Removing redundant set information: {}".format(obj_set))
+ sets.pop(obj_set, None)
self.log.info("Gathering attribute changes to instance members..")
attributes = self.collect_attributes_changed(instance)
# Store data on the instance
- instance.data["lookData"] = {"attributes": attributes,
- "relationships": sets}
+ instance.data["lookData"] = {
+ "attributes": attributes,
+ "relationships": sets
+ }
# Collect file nodes used by shading engines (if we have any)
- files = list()
- looksets = sets.keys()
- shaderAttrs = [
- "surfaceShader",
- "volumeShader",
- "displacementShader",
- "aiSurfaceShader",
- "aiVolumeShader"]
- materials = list()
+ files = []
+ look_sets = sets.keys()
+ shader_attrs = [
+ "surfaceShader",
+ "volumeShader",
+ "displacementShader",
+ "aiSurfaceShader",
+ "aiVolumeShader"]
+ if look_sets:
+ materials = []
- if looksets:
- for look in looksets:
- for at in shaderAttrs:
+ for look in look_sets:
+ for at in shader_attrs:
try:
con = cmds.listConnections("{}.{}".format(look, at))
except ValueError:
@@ -289,12 +342,19 @@ class CollectLook(pyblish.api.InstancePlugin):
self.log.info("Found materials:\n{}".format(materials))
- self.log.info("Found the following sets:\n{}".format(looksets))
+ self.log.info("Found the following sets:\n{}".format(look_sets))
# Get the entire node chain of the look sets
- # history = cmds.listHistory(looksets)
- history = list()
+ # history = cmds.listHistory(look_sets)
+ history = []
for material in materials:
history.extend(cmds.listHistory(material))
+
+ # handle VrayPluginNodeMtl node - see #1397
+ vray_plugin_nodes = cmds.ls(
+ history, type="VRayPluginNodeMtl", long=True)
+ for vray_node in vray_plugin_nodes:
+ history.extend(cmds.listHistory(vray_node))
+
files = cmds.ls(history, type="file", long=True)
files.extend(cmds.ls(history, type="aiImage", long=True))
@@ -313,7 +373,7 @@ class CollectLook(pyblish.api.InstancePlugin):
# Ensure unique shader sets
# Add shader sets to the instance for unify ID validation
- instance.extend(shader for shader in looksets if shader
+ instance.extend(shader for shader in look_sets if shader
not in instance_lookup)
self.log.info("Collected look for %s" % instance)
@@ -331,7 +391,7 @@ class CollectLook(pyblish.api.InstancePlugin):
dict
"""
- sets = dict()
+ sets = {}
for node in instance:
related_sets = lib.get_related_sets(node)
if not related_sets:
@@ -427,6 +487,11 @@ class CollectLook(pyblish.api.InstancePlugin):
"""
self.log.debug("processing: {}".format(node))
+ if cmds.nodeType(node) not in ["file", "aiImage"]:
+ self.log.error(
+ "Unsupported file node: {}".format(cmds.nodeType(node)))
+ raise AssertionError("Unsupported file node")
+
if cmds.nodeType(node) == 'file':
self.log.debug(" - file node")
attribute = "{}.fileTextureName".format(node)
@@ -435,6 +500,7 @@ class CollectLook(pyblish.api.InstancePlugin):
self.log.debug("aiImage node")
attribute = "{}.filename".format(node)
computed_attribute = attribute
+
source = cmds.getAttr(attribute)
self.log.info(" - file source: {}".format(source))
color_space_attr = "{}.colorSpace".format(node)
diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py
index 75749a952e..647a46e240 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render.py
@@ -358,9 +358,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
options["extendFrames"] = extend_frames
options["overrideExistingFrame"] = override_frames
- maya_render_plugin = "MayaPype"
- if attributes.get("useMayaBatch", True):
- maya_render_plugin = "MayaBatch"
+ maya_render_plugin = "MayaBatch"
options["mayaRenderPlugin"] = maya_render_plugin
diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py
index 79488a372c..bdd061578e 100644
--- a/openpype/hosts/maya/plugins/publish/extract_look.py
+++ b/openpype/hosts/maya/plugins/publish/extract_look.py
@@ -1,13 +1,14 @@
+# -*- coding: utf-8 -*-
+"""Maya look extractor."""
import os
import sys
import json
-import copy
import tempfile
import contextlib
import subprocess
from collections import OrderedDict
-from maya import cmds
+from maya import cmds # noqa
import pyblish.api
import avalon.maya
@@ -22,23 +23,38 @@ HARDLINK = 2
def find_paths_by_hash(texture_hash):
- # Find the texture hash key in the dictionary and all paths that
- # originate from it.
+ """Find the texture hash key in the dictionary.
+
+    Returns all paths that originate from it.
+
+ Args:
+ texture_hash (str): Hash of the texture.
+
+    Returns:
+ str: path to texture if found.
+
+ """
key = "data.sourceHashes.{0}".format(texture_hash)
return io.distinct(key, {"type": "version"})
def maketx(source, destination, *args):
- """Make .tx using maketx with some default settings.
+ """Make `.tx` using `maketx` with some default settings.
+
The settings are based on default as used in Arnold's
txManager in the scene.
This function requires the `maketx` executable to be
on the `PATH`.
+
Args:
source (str): Path to source file.
destination (str): Writing destination path.
- """
+        *args: Additional arguments for `maketx`.
+
+ Returns:
+ str: Output of `maketx` command.
+
+ """
cmd = [
"maketx",
"-v", # verbose
@@ -56,7 +72,7 @@ def maketx(source, destination, *args):
cmd = " ".join(cmd)
- CREATE_NO_WINDOW = 0x08000000
+ CREATE_NO_WINDOW = 0x08000000 # noqa
kwargs = dict(args=cmd, stderr=subprocess.STDOUT)
if sys.platform == "win32":
@@ -118,12 +134,58 @@ class ExtractLook(openpype.api.Extractor):
hosts = ["maya"]
families = ["look"]
order = pyblish.api.ExtractorOrder + 0.2
+ scene_type = "ma"
+
+ @staticmethod
+ def get_renderer_name():
+ """Get renderer name from Maya.
+
+ Returns:
+ str: Renderer name.
+
+ """
+ renderer = cmds.getAttr(
+ "defaultRenderGlobals.currentRenderer"
+ ).lower()
+ # handle various renderman names
+ if renderer.startswith("renderman"):
+ renderer = "renderman"
+ return renderer
+
+ def get_maya_scene_type(self, instance):
+ """Get Maya scene type from settings.
+
+ Args:
+ instance (pyblish.api.Instance): Instance with collected
+ project settings.
+
+ """
+ ext_mapping = (
+ instance.context.data["project_settings"]["maya"]["ext_mapping"]
+ )
+ if ext_mapping:
+ self.log.info("Looking in settings for scene type ...")
+ # use extension mapping for first family found
+ for family in self.families:
+ try:
+ self.scene_type = ext_mapping[family]
+ self.log.info(
+ "Using {} as scene type".format(self.scene_type))
+ break
+ except KeyError:
+ # no preset found
+ pass
def process(self, instance):
+ """Plugin entry point.
+ Args:
+ instance: Instance to process.
+
+ """
# Define extract output file path
dir_path = self.staging_dir(instance)
- maya_fname = "{0}.ma".format(instance.name)
+ maya_fname = "{0}.{1}".format(instance.name, self.scene_type)
json_fname = "{0}.json".format(instance.name)
# Make texture dump folder
@@ -148,7 +210,7 @@ class ExtractLook(openpype.api.Extractor):
# Collect all unique files used in the resources
files = set()
- files_metadata = dict()
+ files_metadata = {}
for resource in resources:
# Preserve color space values (force value after filepath change)
# This will also trigger in the same order at end of context to
@@ -162,35 +224,33 @@ class ExtractLook(openpype.api.Extractor):
# files.update(os.path.normpath(f))
# Process the resource files
- transfers = list()
- hardlinks = list()
- hashes = dict()
- forceCopy = instance.data.get("forceCopy", False)
+ transfers = []
+ hardlinks = []
+ hashes = {}
+ force_copy = instance.data.get("forceCopy", False)
self.log.info(files)
for filepath in files_metadata:
- cspace = files_metadata[filepath]["color_space"]
- linearise = False
- if do_maketx:
- if cspace == "sRGB":
- linearise = True
- # set its file node to 'raw' as tx will be linearized
- files_metadata[filepath]["color_space"] = "raw"
+ linearize = False
+ if do_maketx and files_metadata[filepath]["color_space"] == "sRGB": # noqa: E501
+ linearize = True
+ # set its file node to 'raw' as tx will be linearized
+ files_metadata[filepath]["color_space"] = "raw"
- source, mode, hash = self._process_texture(
+ source, mode, texture_hash = self._process_texture(
filepath,
do_maketx,
staging=dir_path,
- linearise=linearise,
- force=forceCopy
+ linearize=linearize,
+ force=force_copy
)
destination = self.resource_destination(instance,
source,
do_maketx)
# Force copy is specified.
- if forceCopy:
+ if force_copy:
mode = COPY
if mode == COPY:
@@ -202,10 +262,10 @@ class ExtractLook(openpype.api.Extractor):
# Store the hashes from hash to destination to include in the
# database
- hashes[hash] = destination
+ hashes[texture_hash] = destination
# Remap the resources to the destination path (change node attributes)
- destinations = dict()
+ destinations = {}
remap = OrderedDict() # needs to be ordered, see color space values
for resource in resources:
source = os.path.normpath(resource["source"])
@@ -222,7 +282,7 @@ class ExtractLook(openpype.api.Extractor):
color_space_attr = resource["node"] + ".colorSpace"
color_space = cmds.getAttr(color_space_attr)
if files_metadata[source]["color_space"] == "raw":
- # set colorpsace to raw if we linearized it
+ # set color space to raw if we linearized it
color_space = "Raw"
# Remap file node filename to destination
attr = resource["attribute"]
@@ -267,11 +327,11 @@ class ExtractLook(openpype.api.Extractor):
json.dump(data, f)
if "files" not in instance.data:
- instance.data["files"] = list()
+ instance.data["files"] = []
if "hardlinks" not in instance.data:
- instance.data["hardlinks"] = list()
+ instance.data["hardlinks"] = []
if "transfers" not in instance.data:
- instance.data["transfers"] = list()
+ instance.data["transfers"] = []
instance.data["files"].append(maya_fname)
instance.data["files"].append(json_fname)
@@ -311,14 +371,26 @@ class ExtractLook(openpype.api.Extractor):
maya_path))
def resource_destination(self, instance, filepath, do_maketx):
- anatomy = instance.context.data["anatomy"]
+ """Get resource destination path.
+ This is utility function to change path if resource file name is
+ changed by some external tool like `maketx`.
+
+ Args:
+ instance: Current Instance.
+ filepath (str): Resource path
+ do_maketx (bool): Flag if resource is processed by `maketx`.
+
+ Returns:
+ str: Path to resource file
+
+ """
resources_dir = instance.data["resourcesDir"]
# Compute destination location
basename, ext = os.path.splitext(os.path.basename(filepath))
- # If maketx then the texture will always end with .tx
+ # If `maketx` then the texture will always end with .tx
if do_maketx:
ext = ".tx"
@@ -326,7 +398,7 @@ class ExtractLook(openpype.api.Extractor):
resources_dir, basename + ext
)
- def _process_texture(self, filepath, do_maketx, staging, linearise, force):
+ def _process_texture(self, filepath, do_maketx, staging, linearize, force):
"""Process a single texture file on disk for publishing.
This will:
1. Check whether it's already published, if so it will do hardlink
@@ -363,7 +435,7 @@ class ExtractLook(openpype.api.Extractor):
# Produce .tx file in staging if source file is not .tx
converted = os.path.join(staging, "resources", fname + ".tx")
- if linearise:
+ if linearize:
self.log.info("tx: converting sRGB -> linear")
colorconvert = "--colorconvert sRGB linear"
else:
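The `--colorconvert sRGB linear` switch is only appended when the source color space is sRGB. Assembling that command line can be sketched standalone; the flags besides `-v`, `-o` and `--colorconvert` shown in this diff are not listed here, so the sketch keeps only those:

```python
def build_maketx_args(source, destination, linearize):
    """Assemble a maketx argument list; linearize adds the sRGB->linear convert."""
    cmd = ["maketx", "-v", source, "-o", destination]
    if linearize:
        cmd += ["--colorconvert", "sRGB", "linear"]
    return cmd
```

Keeping the arguments as a list until the final join also avoids the quoting issues that come with building a single command string by hand.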
diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
new file mode 100644
index 0000000000..7c9e201986
--- /dev/null
+++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py
@@ -0,0 +1,82 @@
+# -*- coding: utf-8 -*-
+"""Redshift Proxy extractor."""
+import os
+
+import avalon.maya
+import openpype.api
+
+from maya import cmds
+
+
+class ExtractRedshiftProxy(openpype.api.Extractor):
+ """Extract the content of the instance to a redshift proxy file."""
+
+ label = "Redshift Proxy (.rs)"
+ hosts = ["maya"]
+ families = ["redshiftproxy"]
+
+ def process(self, instance):
+ """Extractor entry point."""
+
+ staging_dir = self.staging_dir(instance)
+ file_name = "{}.rs".format(instance.name)
+ file_path = os.path.join(staging_dir, file_name)
+
+ anim_on = instance.data["animation"]
+ rs_options = "exportConnectivity=0;enableCompression=1;keepUnused=0;"
+ repr_files = file_name
+
+ if not anim_on:
+ # Remove animation information because it is not required for
+ # non-animated subsets
+ instance.data.pop("proxyFrameStart", None)
+ instance.data.pop("proxyFrameEnd", None)
+
+ else:
+ start_frame = instance.data["proxyFrameStart"]
+ end_frame = instance.data["proxyFrameEnd"]
+ rs_options = "{}startFrame={};endFrame={};frameStep={};".format(
+ rs_options, start_frame,
+ end_frame, instance.data["proxyFrameStep"]
+ )
+
+ root, ext = os.path.splitext(file_path)
+            # Padding is taken from the number of digits of end_frame.
+            # Not sure where Redshift takes this from.
+ repr_files = [
+ "{}.{}{}".format(root, str(frame).rjust(4, "0"), ext) # noqa: E501
+ for frame in range(
+ int(start_frame),
+ int(end_frame) + 1,
+ int(instance.data["proxyFrameStep"]),
+ )]
+ # vertex_colors = instance.data.get("vertexColors", False)
+
+ # Write out rs file
+ self.log.info("Writing: '%s'" % file_path)
+ with avalon.maya.maintained_selection():
+ cmds.select(instance.data["setMembers"], noExpand=True)
+ cmds.file(file_path,
+ pr=False,
+ force=True,
+ type="Redshift Proxy",
+ exportSelected=True,
+ options=rs_options)
+
+ if "representations" not in instance.data:
+ instance.data["representations"] = []
+
+ self.log.debug("Files: {}".format(repr_files))
+
+ representation = {
+ 'name': 'rs',
+ 'ext': 'rs',
+ 'files': repr_files,
+ "stagingDir": staging_dir,
+ }
+ if anim_on:
+ representation["frameStart"] = instance.data["proxyFrameStart"]
+ instance.data["representations"].append(representation)
+
+ self.log.info("Extracted instance '%s' to: %s"
+ % (instance.name, staging_dir))
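For animated proxies the extractor predicts one file per frame with four-digit zero padding, as in the list comprehension above. That naming scheme can be reproduced as a small standalone helper:

```python
import os

def proxy_frame_files(file_path, start, end, step=1, pad=4):
    """Expand 'name.rs' into per-frame names like 'name.0001.rs'."""
    root, ext = os.path.splitext(file_path)
    return [
        "{}.{}{}".format(root, str(frame).rjust(pad, "0"), ext)
        for frame in range(int(start), int(end) + 1, int(step))
    ]
```

`str(frame).rjust(pad, "0")` left-pads the frame number with zeros; `range(start, end + 1, step)` makes the end frame inclusive, matching the extractor's frame range.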
diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
index d3a3df6b1c..c9edfc8343 100644
--- a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
+++ b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py
@@ -5,7 +5,7 @@ import re
import avalon.maya
import openpype.api
-from openpype.hosts.maya.render_setup_tools import export_in_rs_layer
+from openpype.hosts.maya.api.render_setup_tools import export_in_rs_layer
from maya import cmds
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
index 48431d0906..5e737ca876 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
@@ -73,8 +73,10 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
# check if any objectSets are not present ion the relationships
missing_sets = [s for s in sets if s not in relationships]
if missing_sets:
- for set in missing_sets:
- if '_SET' not in set:
+ for missing_set in missing_sets:
+ cls.log.debug(missing_set)
+
+ if '_SET' not in missing_set:
# A set of this node is not coming along, this is wrong!
cls.log.error("Missing sets '{}' for node "
"'{}'".format(missing_sets, node))
@@ -82,8 +84,8 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
continue
# Ensure the node is in the sets that are collected
- for shaderset, data in relationships.items():
- if shaderset not in sets:
+ for shader_set, data in relationships.items():
+ if shader_set not in sets:
# no need to check for a set if the node
# isn't in it anyway
continue
@@ -94,7 +96,7 @@ class ValidateLookSets(pyblish.api.InstancePlugin):
# The node is not found in the collected set
# relationships
cls.log.error("Missing '{}' in collected set node "
- "'{}'".format(node, shaderset))
+ "'{}'".format(node, shader_set))
invalid.append(node)
continue
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
index 1c6aa3078e..b2ef174374 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
@@ -8,7 +8,7 @@ import openpype.api
class ValidateUnrealMeshTriangulated(pyblish.api.InstancePlugin):
"""Validate if mesh is made of triangles for Unreal Engine"""
- order = openpype.api.ValidateMeshOder
+ order = openpype.api.ValidateMeshOrder
hosts = ["maya"]
families = ["unrealStaticMesh"]
category = "geometry"
diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py
index d556a89fa3..6d27c66882 100644
--- a/openpype/hosts/maya/startup/userSetup.py
+++ b/openpype/hosts/maya/startup/userSetup.py
@@ -10,7 +10,6 @@ print("starting OpenPype usersetup")
settings = get_project_settings(os.environ['AVALON_PROJECT'])
shelf_preset = settings['maya'].get('project_shelf')
-
if shelf_preset:
project = os.environ["AVALON_PROJECT"]
@@ -23,7 +22,7 @@ if shelf_preset:
print(import_string)
exec(import_string)
-cmds.evalDeferred("mlib.shelf(name=shelf_preset['name'], iconPath=icon_path, preset=shelf_preset)")
+ cmds.evalDeferred("mlib.shelf(name=shelf_preset['name'], iconPath=icon_path, preset=shelf_preset)")
print("finished OpenPype usersetup")
diff --git a/openpype/hosts/nuke/api/__init__.py b/openpype/hosts/nuke/api/__init__.py
index c80507e7ea..bd7a95f916 100644
--- a/openpype/hosts/nuke/api/__init__.py
+++ b/openpype/hosts/nuke/api/__init__.py
@@ -106,7 +106,7 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
instance, old_value, new_value))
- from avalon.api.nuke import (
+ from avalon.nuke import (
viewer_update_and_undo_stop,
add_publish_knob
)
diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py
index d95af6ec4c..7ef5401292 100644
--- a/openpype/hosts/nuke/api/lib.py
+++ b/openpype/hosts/nuke/api/lib.py
@@ -1,6 +1,8 @@
import os
import re
import sys
+import six
+import platform
from collections import OrderedDict
@@ -19,7 +21,6 @@ from openpype.api import (
get_hierarchy,
get_asset,
get_current_project_settings,
- config,
ApplicationManager
)
@@ -29,36 +30,34 @@ from .utils import set_context_favorites
log = Logger().get_logger(__name__)
-self = sys.modules[__name__]
-self._project = None
-self.workfiles_launched = False
-self._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")
+opnl = sys.modules[__name__]
+opnl._project = None
+opnl.project_name = os.getenv("AVALON_PROJECT")
+opnl.workfiles_launched = False
+opnl._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")
-def get_node_imageio_setting(**kwarg):
+def get_created_node_imageio_setting(**kwarg):
''' Get preset data for dataflow (fileType, compression, bitDepth)
'''
- log.info(kwarg)
- host = str(kwarg.get("host", "nuke"))
+ log.debug(kwarg)
nodeclass = kwarg.get("nodeclass", None)
creator = kwarg.get("creator", None)
- project_name = os.getenv("AVALON_PROJECT")
- assert any([host, nodeclass]), nuke.message(
+ assert any([creator, nodeclass]), nuke.message(
"`{}`: Missing mandatory kwargs `host`, `cls`".format(__file__))
- imageio_nodes = (get_anatomy_settings(project_name)
- ["imageio"]
- .get(host, None)
- ["nodes"]
- ["requiredNodes"]
- )
+ imageio = get_anatomy_settings(opnl.project_name)["imageio"]
+ imageio_nodes = imageio["nuke"]["nodes"]["requiredNodes"]
+ imageio_node = None
for node in imageio_nodes:
log.info(node)
- if node["nukeNodeClass"] == nodeclass:
- if creator in node["plugins"]:
- imageio_node = node
+ if (node["nukeNodeClass"] != nodeclass) and (
+ creator not in node["plugins"]):
+ continue
+
+ imageio_node = node
log.info("ImageIO node: {}".format(imageio_node))
return imageio_node
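The lookup above keeps the first preset node whose Nuke node class and creator plugin both match; negating that pair of conditions for the `continue` needs `or`, not `and` (De Morgan). A hypothetical standalone version of the same selection:

```python
def match_imageio_node(nodes, nodeclass, creator):
    """Return the first preset node matching both node class and creator plugin."""
    for node in nodes:
        # skip unless BOTH the class and the creator plugin match
        if node["nukeNodeClass"] != nodeclass or creator not in node["plugins"]:
            continue
        return node
    return None
```

Returning as soon as a match is found also avoids the original pattern of overwriting `imageio_node` on every later match.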
@@ -67,12 +66,9 @@ def get_node_imageio_setting(**kwarg):
def get_imageio_input_colorspace(filename):
''' Get input file colorspace based on regex in settings.
'''
- imageio_regex_inputs = (get_anatomy_settings(os.getenv("AVALON_PROJECT"))
- ["imageio"]
- ["nuke"]
- ["regexInputs"]
- ["inputs"]
- )
+ imageio_regex_inputs = (
+ get_anatomy_settings(opnl.project_name)
+ ["imageio"]["nuke"]["regexInputs"]["inputs"])
preset_clrsp = None
for regexInput in imageio_regex_inputs:
@@ -104,40 +100,39 @@ def check_inventory_versions():
"""
# get all Loader nodes by avalon attribute metadata
for each in nuke.allNodes():
- if each.Class() == 'Read':
- container = avalon.nuke.parse_container(each)
+ container = avalon.nuke.parse_container(each)
- if container:
- node = nuke.toNode(container["objectName"])
- avalon_knob_data = avalon.nuke.read(
- node)
+ if container:
+ node = nuke.toNode(container["objectName"])
+ avalon_knob_data = avalon.nuke.read(
+ node)
- # get representation from io
- representation = io.find_one({
- "type": "representation",
- "_id": io.ObjectId(avalon_knob_data["representation"])
- })
+ # get representation from io
+ representation = io.find_one({
+ "type": "representation",
+ "_id": io.ObjectId(avalon_knob_data["representation"])
+ })
- # Get start frame from version data
- version = io.find_one({
- "type": "version",
- "_id": representation["parent"]
- })
+ # Get start frame from version data
+ version = io.find_one({
+ "type": "version",
+ "_id": representation["parent"]
+ })
- # get all versions in list
- versions = io.find({
- "type": "version",
- "parent": version["parent"]
- }).distinct('name')
+ # get all versions in list
+ versions = io.find({
+ "type": "version",
+ "parent": version["parent"]
+ }).distinct('name')
- max_version = max(versions)
+ max_version = max(versions)
- # check the available version and do match
- # change color of node if not max verion
- if version.get("name") not in [max_version]:
- node["tile_color"].setValue(int("0xd84f20ff", 16))
- else:
- node["tile_color"].setValue(int("0x4ecd25ff", 16))
+                # check against the available versions and
+                # change color of node if not max version
+ if version.get("name") not in [max_version]:
+ node["tile_color"].setValue(int("0xd84f20ff", 16))
+ else:
+ node["tile_color"].setValue(int("0x4ecd25ff", 16))
def writes_version_sync():
@@ -153,34 +148,33 @@ def writes_version_sync():
except Exception:
return
- for each in nuke.allNodes():
- if each.Class() == 'Write':
- # check if the node is avalon tracked
- if self._node_tab_name not in each.knobs():
+ for each in nuke.allNodes(filter="Write"):
+ # check if the node is avalon tracked
+ if opnl._node_tab_name not in each.knobs():
+ continue
+
+ avalon_knob_data = avalon.nuke.read(
+ each)
+
+ try:
+ if avalon_knob_data['families'] not in ["render"]:
+ log.debug(avalon_knob_data['families'])
continue
- avalon_knob_data = avalon.nuke.read(
- each)
+ node_file = each['file'].value()
- try:
- if avalon_knob_data['families'] not in ["render"]:
- log.debug(avalon_knob_data['families'])
- continue
+ node_version = "v" + get_version_from_path(node_file)
+ log.debug("node_version: {}".format(node_version))
- node_file = each['file'].value()
-
- node_version = "v" + get_version_from_path(node_file)
- log.debug("node_version: {}".format(node_version))
-
- node_new_file = node_file.replace(node_version, new_version)
- each['file'].setValue(node_new_file)
- if not os.path.isdir(os.path.dirname(node_new_file)):
- log.warning("Path does not exist! I am creating it.")
- os.makedirs(os.path.dirname(node_new_file))
- except Exception as e:
- log.warning(
- "Write node: `{}` has no version in path: {}".format(
- each.name(), e))
+ node_new_file = node_file.replace(node_version, new_version)
+ each['file'].setValue(node_new_file)
+ if not os.path.isdir(os.path.dirname(node_new_file)):
+ log.warning("Path does not exist! I am creating it.")
+ os.makedirs(os.path.dirname(node_new_file))
+ except Exception as e:
+ log.warning(
+ "Write node: `{}` has no version in path: {}".format(
+ each.name(), e))
def version_up_script():
@@ -201,24 +195,22 @@ def check_subsetname_exists(nodes, subset_name):
Returns:
bool: True of False
"""
- result = next((True for n in nodes
- if subset_name in avalon.nuke.read(n).get("subset", "")), False)
- return result
+ return next((True for n in nodes
+ if subset_name in avalon.nuke.read(n).get("subset", "")),
+ False)
def get_render_path(node):
''' Generate Render path from presets regarding avalon knob data
'''
- data = dict()
- data['avalon'] = avalon.nuke.read(
- node)
-
+ data = {'avalon': avalon.nuke.read(node)}
data_preset = {
- "class": data['avalon']['family'],
- "preset": data['avalon']['families']
+ "nodeclass": data['avalon']['family'],
+ "families": [data['avalon']['families']],
+ "creator": data['avalon']['creator']
}
- nuke_imageio_writes = get_node_imageio_setting(**data_preset)
+ nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)
application = lib.get_application(os.environ["AVALON_APP_NAME"])
data.update({
@@ -324,7 +316,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
node (obj): group node with avalon data as Knobs
'''
- imageio_writes = get_node_imageio_setting(**data)
+ imageio_writes = get_created_node_imageio_setting(**data)
app_manager = ApplicationManager()
app_name = os.environ.get("AVALON_APP_NAME")
if app_name:
@@ -367,8 +359,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
# adding dataflow template
log.debug("imageio_writes: `{}`".format(imageio_writes))
for knob in imageio_writes["knobs"]:
- if knob["name"] not in ["_id", "_previous"]:
- _data.update({knob["name"]: knob["value"]})
+ _data.update({knob["name"]: knob["value"]})
_data = anlib.fix_data_for_node_create(_data)
@@ -390,16 +381,19 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
"inputName": input.name()})
prev_node = nuke.createNode(
"Input", "name {}".format(input.name()))
+ prev_node.hideControlPanel()
else:
# generic input node connected to nothing
prev_node = nuke.createNode(
"Input", "name {}".format("rgba"))
+ prev_node.hideControlPanel()
# creating pre-write nodes `prenodes`
if prenodes:
for name, klass, properties, set_output_to in prenodes:
# create node
now_node = nuke.createNode(klass, "name {}".format(name))
+ now_node.hideControlPanel()
# add data to knob
for k, v in properties:
@@ -421,17 +415,21 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
for i, node_name in enumerate(set_output_to):
input_node = nuke.createNode(
"Input", "name {}".format(node_name))
+ input_node.hideControlPanel()
connections.append({
"node": nuke.toNode(node_name),
"inputName": node_name})
now_node.setInput(1, input_node)
+
elif isinstance(set_output_to, str):
input_node = nuke.createNode(
"Input", "name {}".format(node_name))
+ input_node.hideControlPanel()
connections.append({
"node": nuke.toNode(set_output_to),
"inputName": set_output_to})
now_node.setInput(0, input_node)
+
else:
now_node.setInput(0, prev_node)
@@ -443,7 +441,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
"inside_{}".format(name),
**_data
)
-
+ write_node.hideControlPanel()
# connect to previous node
now_node.setInput(0, prev_node)
@@ -451,6 +449,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
prev_node = now_node
now_node = nuke.createNode("Output", "name Output1")
+ now_node.hideControlPanel()
# connect to previous node
now_node.setInput(0, prev_node)
@@ -498,7 +497,7 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
add_deadline_tab(GN)
# open the our Tab as default
- GN[self._node_tab_name].setFlag(0)
+ GN[opnl._node_tab_name].setFlag(0)
# set tile color
tile_color = _data.get("tile_color", "0xff0000ff")
@@ -621,7 +620,7 @@ class WorkfileSettings(object):
root_node=None,
nodes=None,
**kwargs):
- self._project = kwargs.get(
+ opnl._project = kwargs.get(
"project") or io.find_one({"type": "project"})
self._asset = kwargs.get("asset_name") or api.Session["AVALON_ASSET"]
self._asset_entity = get_asset(self._asset)
@@ -664,8 +663,7 @@ class WorkfileSettings(object):
]
erased_viewers = []
- for v in [n for n in self._nodes
- if "Viewer" in n.Class()]:
+ for v in nuke.allNodes(filter="Viewer"):
v['viewerProcess'].setValue(str(viewer_dict["viewerProcess"]))
if str(viewer_dict["viewerProcess"]) \
not in v['viewerProcess'].value():
@@ -709,7 +707,7 @@ class WorkfileSettings(object):
log.error(msg)
nuke.message(msg)
- log.debug(">> root_dict: {}".format(root_dict))
+ log.warning(">> root_dict: {}".format(root_dict))
# first set OCIO
if self._root_node["colorManagement"].value() \
@@ -731,41 +729,41 @@ class WorkfileSettings(object):
# third set ocio custom path
if root_dict.get("customOCIOConfigPath"):
- self._root_node["customOCIOConfigPath"].setValue(
- str(root_dict["customOCIOConfigPath"]).format(
- **os.environ
- ).replace("\\", "/")
- )
- log.debug("nuke.root()['{}'] changed to: {}".format(
- "customOCIOConfigPath", root_dict["customOCIOConfigPath"]))
- root_dict.pop("customOCIOConfigPath")
+ unresolved_path = root_dict["customOCIOConfigPath"]
+ ocio_paths = unresolved_path[platform.system().lower()]
+
+            resolved_path = None
+            for ocio_p in ocio_paths:
+                ocio_path = str(ocio_p).format(**os.environ)
+                if not os.path.exists(ocio_path):
+                    continue
+                resolved_path = ocio_path
+                break
+
+ if resolved_path:
+ self._root_node["customOCIOConfigPath"].setValue(
+ str(resolved_path).replace("\\", "/")
+ )
+ log.debug("nuke.root()['{}'] changed to: {}".format(
+ "customOCIOConfigPath", resolved_path))
+ root_dict.pop("customOCIOConfigPath")
# then set the rest
for knob, value in root_dict.items():
+            # skip the unresolved ocio config path
+            # its value will still be a dict
+ if isinstance(value, dict):
+ continue
if self._root_node[knob].value() not in value:
self._root_node[knob].setValue(str(value))
log.debug("nuke.root()['{}'] changed to: {}".format(
knob, value))
- def set_writes_colorspace(self, write_dict):
+ def set_writes_colorspace(self):
- ''' Adds correct colorspace to write node dict
+ ''' Adds correct colorspace to write nodes
- Arguments:
- write_dict (dict): nuke write node as dictionary
-
'''
- # scene will have fixed colorspace following presets for the project
- if not isinstance(write_dict, dict):
- msg = "set_root_colorspace(): argument should be dictionary"
- log.error(msg)
- return
-
from avalon.nuke import read
- for node in nuke.allNodes():
-
- if node.Class() in ["Viewer", "Dot"]:
- continue
+ for node in nuke.allNodes(filter="Group"):
# get data from avalon knob
avalon_knob_data = read(node)
@@ -781,49 +779,63 @@ class WorkfileSettings(object):
if avalon_knob_data.get("families"):
families.append(avalon_knob_data.get("families"))
- # except disabled nodes but exclude backdrops in test
- for fmly, knob in write_dict.items():
- write = None
- if (fmly in families):
- # Add all nodes in group instances.
- if node.Class() == "Group":
- node.begin()
- for x in nuke.allNodes():
- if x.Class() == "Write":
- write = x
- node.end()
- elif node.Class() == "Write":
- write = node
- else:
- log.warning("Wrong write node Class")
+ data_preset = {
+ "nodeclass": avalon_knob_data["family"],
+ "families": families,
+ "creator": avalon_knob_data['creator']
+ }
- write["colorspace"].setValue(str(knob["colorspace"]))
- log.info(
- "Setting `{0}` to `{1}`".format(
- write.name(),
- knob["colorspace"]))
+ nuke_imageio_writes = get_created_node_imageio_setting(
+ **data_preset)
- def set_reads_colorspace(self, reads):
+ log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes))
+
+ if not nuke_imageio_writes:
+ return
+
+ write_node = None
+
+ # get into the group node
+ node.begin()
+ for x in nuke.allNodes():
+ if x.Class() == "Write":
+ write_node = x
+ node.end()
+
+ if not write_node:
+ return
+
+ # write all knobs to node
+ for knob in nuke_imageio_writes["knobs"]:
+ value = knob["value"]
+ if isinstance(value, six.text_type):
+ value = str(value)
+ if str(value).startswith("0x"):
+ value = int(value, 16)
+
+ write_node[knob["name"]].setValue(value)
+
+
+ def set_reads_colorspace(self, read_clrs_inputs):
""" Setting colorspace to Read nodes
- Looping trought all read nodes and tries to set colorspace based
- on regex rules in presets
+ Loops through all Read nodes and tries to set colorspace based
+ on regex rules in presets
"""
- changes = dict()
+ changes = {}
for n in nuke.allNodes():
file = nuke.filename(n)
- if not n.Class() == "Read":
+ if n.Class() != "Read":
continue
- # load nuke presets for Read's colorspace
- read_clrs_presets = config.get_init_presets()["colorspace"].get(
- "nuke", {}).get("read", {})
-
- # check if any colorspace presets for read is mathing
+ # check if any colorspace preset for read is matching
- preset_clrsp = next((read_clrs_presets[k]
- for k in read_clrs_presets
- if bool(re.search(k, file))),
- None)
+ preset_clrsp = None
+
+ for color_input in read_clrs_inputs:
+ if not bool(re.search(color_input["regex"], file)):
+ continue
+ preset_clrsp = color_input["colorspace"]
+
log.debug(preset_clrsp)
if preset_clrsp is not None:
current = n["colorspace"].value()
@@ -857,13 +869,15 @@ class WorkfileSettings(object):
def set_colorspace(self):
- ''' Setting colorpace following presets
+ ''' Setting colorspace following presets
'''
- nuke_colorspace = config.get_init_presets(
- )["colorspace"].get("nuke", None)
+ # get imageio
+ imageio = get_anatomy_settings(opnl.project_name)["imageio"]
+ nuke_colorspace = imageio["nuke"]
try:
- self.set_root_colorspace(nuke_colorspace["root"])
+ self.set_root_colorspace(nuke_colorspace["workfile"])
except AttributeError:
- msg = "set_colorspace(): missing `root` settings in template"
+ msg = "set_colorspace(): missing `workfile` settings in template"
+ nuke.message(msg)
try:
self.set_viewers_colorspace(nuke_colorspace["viewer"])
@@ -873,15 +887,14 @@ class WorkfileSettings(object):
log.error(msg)
try:
- self.set_writes_colorspace(nuke_colorspace["write"])
- except AttributeError:
- msg = "set_colorspace(): missing `write` settings in template"
- nuke.message(msg)
- log.error(msg)
+ self.set_writes_colorspace()
+ except AttributeError as _error:
+ nuke.message(str(_error))
+ log.error(_error)
- reads = nuke_colorspace.get("read")
- if reads:
- self.set_reads_colorspace(reads)
+ read_clrs_inputs = nuke_colorspace["regexInputs"].get("inputs", [])
+ if read_clrs_inputs:
+ self.set_reads_colorspace(read_clrs_inputs)
try:
for key in nuke_colorspace:
@@ -1063,15 +1076,14 @@ class WorkfileSettings(object):
def set_favorites(self):
work_dir = os.getenv("AVALON_WORKDIR")
asset = os.getenv("AVALON_ASSET")
- project = os.getenv("AVALON_PROJECT")
favorite_items = OrderedDict()
# project
# get project's root and split to parts
projects_root = os.path.normpath(work_dir.split(
- project)[0])
+ opnl.project_name)[0])
# add project name
- project_dir = os.path.join(projects_root, project) + "/"
+ project_dir = os.path.join(projects_root, opnl.project_name) + "/"
# add to favorites
favorite_items.update({"Project dir": project_dir.replace("\\", "/")})
@@ -1121,13 +1133,13 @@ def get_write_node_template_attr(node):
data['avalon'] = avalon.nuke.read(
node)
data_preset = {
- "class": data['avalon']['family'],
- "families": data['avalon']['families'],
- "preset": data['avalon']['families'] # omit < 2.0.0v
+ "nodeclass": data['avalon']['family'],
+ "families": [data['avalon']['families']],
+ "creator": data['avalon']['creator']
}
# get template data
- nuke_imageio_writes = get_node_imageio_setting(**data_preset)
+ nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)
# collecting correct data
correct_data = OrderedDict({
@@ -1223,8 +1235,7 @@ class ExporterReview:
"""
anlib.reset_selection()
ipn_orig = None
- for v in [n for n in nuke.allNodes()
- if "Viewer" == n.Class()]:
+ for v in nuke.allNodes(filter="Viewer"):
ip = v['input_process'].getValue()
ipn = v['input_process_node'].getValue()
if "VIEWER_INPUT" not in ipn and ip:
@@ -1637,8 +1648,8 @@ def launch_workfiles_app():
if not open_at_start:
return
- if not self.workfiles_launched:
- self.workfiles_launched = True
+ if not opnl.workfiles_launched:
+ opnl.workfiles_launched = True
workfiles.show(os.environ["AVALON_WORKDIR"])
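The `customOCIOConfigPath` handling added in `lib.py` above resolves a per-platform list of path templates against the environment and keeps the first one that exists on disk. A minimal standalone sketch of that logic (the function name and docstring are illustrative, not part of the codebase):

```python
import os
import platform


def resolve_ocio_config_path(unresolved_path):
    """Return the first existing OCIO config path for this platform.

    ``unresolved_path`` is assumed to map platform names ("windows",
    "linux", "darwin") to lists of path templates that may contain
    environment variable placeholders such as ``{OCIO_ROOT}``.
    """
    ocio_paths = unresolved_path[platform.system().lower()]
    for ocio_p in ocio_paths:
        candidate = str(ocio_p).format(**os.environ)
        if not os.path.exists(candidate):
            continue
        # Nuke knobs expect forward slashes even on Windows
        return candidate.replace("\\", "/")
    return None
```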
diff --git a/openpype/hosts/nuke/api/menu.py b/openpype/hosts/nuke/api/menu.py
index 2317066528..021ea04159 100644
--- a/openpype/hosts/nuke/api/menu.py
+++ b/openpype/hosts/nuke/api/menu.py
@@ -26,9 +26,9 @@ def install():
menu.addCommand(
name,
workfiles.show,
- index=(rm_item[0])
+ index=2
)
-
+ menu.addSeparator(index=3)
# replace reset resolution from avalon core to pype's
name = "Reset Resolution"
new_name = "Set Resolution"
@@ -63,16 +63,7 @@ def install():
# add colorspace menu item
name = "Set Colorspace"
menu.addCommand(
- name, lambda: WorkfileSettings().set_colorspace(),
- index=(rm_item[0] + 2)
- )
- log.debug("Adding menu item: {}".format(name))
-
- # add workfile builder menu item
- name = "Build Workfile"
- menu.addCommand(
- name, lambda: BuildWorkfile().process(),
- index=(rm_item[0] + 7)
+ name, lambda: WorkfileSettings().set_colorspace()
)
log.debug("Adding menu item: {}".format(name))
@@ -80,11 +71,20 @@ def install():
name = "Apply All Settings"
menu.addCommand(
name,
- lambda: WorkfileSettings().set_context_settings(),
- index=(rm_item[0] + 3)
+ lambda: WorkfileSettings().set_context_settings()
)
log.debug("Adding menu item: {}".format(name))
+ menu.addSeparator()
+
+ # add workfile builder menu item
+ name = "Build Workfile"
+ menu.addCommand(
+ name, lambda: BuildWorkfile().process()
+ )
+ log.debug("Adding menu item: {}".format(name))
+
+
# adding shortcuts
add_shortcuts_from_presets()
diff --git a/openpype/hosts/nuke/plugins/create/create_write_prerender.py b/openpype/hosts/nuke/plugins/create/create_write_prerender.py
index 38d1a0c2ed..6e1a2ddd96 100644
--- a/openpype/hosts/nuke/plugins/create/create_write_prerender.py
+++ b/openpype/hosts/nuke/plugins/create/create_write_prerender.py
@@ -77,10 +77,14 @@ class CreateWritePrerender(plugin.PypeCreator):
write_data = {
"nodeclass": self.n_class,
"families": [self.family],
- "avalon": self.data,
- "creator": self.__class__.__name__
+ "avalon": self.data
}
+ # add creator data
+ creator_data = {"creator": self.__class__.__name__}
+ self.data.update(creator_data)
+ write_data.update(creator_data)
+
if self.presets.get('fpath_template'):
self.log.info("Adding template path from preset")
write_data.update(
diff --git a/openpype/hosts/nuke/plugins/create/create_write_render.py b/openpype/hosts/nuke/plugins/create/create_write_render.py
index 72f851f19c..04983e9c75 100644
--- a/openpype/hosts/nuke/plugins/create/create_write_render.py
+++ b/openpype/hosts/nuke/plugins/create/create_write_render.py
@@ -80,10 +80,14 @@ class CreateWriteRender(plugin.PypeCreator):
write_data = {
"nodeclass": self.n_class,
"families": [self.family],
- "avalon": self.data,
- "creator": self.__class__.__name__
+ "avalon": self.data
}
+ # add creator data
+ creator_data = {"creator": self.__class__.__name__}
+ self.data.update(creator_data)
+ write_data.update(creator_data)
+
if self.presets.get('fpath_template'):
self.log.info("Adding template path from preset")
write_data.update(
diff --git a/openpype/hosts/nuke/plugins/load/load_mov.py b/openpype/hosts/nuke/plugins/load/load_mov.py
index 92726913af..8b8c5d0c10 100644
--- a/openpype/hosts/nuke/plugins/load/load_mov.py
+++ b/openpype/hosts/nuke/plugins/load/load_mov.py
@@ -135,12 +135,14 @@ class LoadMov(api.Loader):
read_name = self.node_name_template.format(**name_data)
- # Create the Loader with the filename path set
+ read_node = nuke.createNode(
+ "Read",
+ "name {}".format(read_name)
+ )
+
+ # to avoid multiple undo steps for the rest of the process
+ # undo-ing is switched off
with viewer_update_and_undo_stop():
- read_node = nuke.createNode(
- "Read",
- "name {}".format(read_name)
- )
read_node["file"].setValue(file)
read_node["origfirst"].setValue(first)
diff --git a/openpype/hosts/nuke/plugins/load/load_sequence.py b/openpype/hosts/nuke/plugins/load/load_sequence.py
index df7aa55cd1..71f0b8c298 100644
--- a/openpype/hosts/nuke/plugins/load/load_sequence.py
+++ b/openpype/hosts/nuke/plugins/load/load_sequence.py
@@ -139,11 +139,15 @@ class LoadSequence(api.Loader):
read_name = self.node_name_template.format(**name_data)
# Create the Loader with the filename path set
+
+ # TODO: it might be universal read to img/geo/camera
+ r = nuke.createNode(
+ "Read",
+ "name {}".format(read_name))
+
+ # to avoid multiple undo steps for the rest of the process
+ # undo-ing is switched off
with viewer_update_and_undo_stop():
- # TODO: it might be universal read to img/geo/camera
- r = nuke.createNode(
- "Read",
- "name {}".format(read_name))
r["file"].setValue(file)
# Set colorspace defined in version data
diff --git a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py
index 9c7f1b5e95..4257ed3131 100644
--- a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py
+++ b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py
@@ -34,7 +34,8 @@ class CollectSlate(pyblish.api.InstancePlugin):
if slate_node:
instance.data["slateNode"] = slate_node
instance.data["families"].append("slate")
+ instance.data["versionData"]["families"].append("slate")
self.log.info(
"Slate node is in node graph: `{}`".format(slate.name()))
self.log.debug(
- "__ instance: `{}`".format(instance))
+ "__ instance.data: `{}`".format(instance.data))
diff --git a/openpype/hosts/nuke/plugins/publish/precollect_instances.py b/openpype/hosts/nuke/plugins/publish/precollect_instances.py
index 92f96ea48d..cdb0589525 100644
--- a/openpype/hosts/nuke/plugins/publish/precollect_instances.py
+++ b/openpype/hosts/nuke/plugins/publish/precollect_instances.py
@@ -55,11 +55,6 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
families_ak = avalon_knob_data.get("families", [])
families = list()
- if families_ak:
- families.append(families_ak.lower())
-
- families.append(family)
-
# except disabled nodes but exclude backdrops in test
if ("nukenodes" not in family) and (node["disable"].value()):
continue
@@ -81,36 +76,33 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
# Add all nodes in group instances.
if node.Class() == "Group":
# only alter families for render family
- if "write" in families_ak:
+ if "write" in families_ak.lower():
target = node["render"].value()
if target == "Use existing frames":
# Local rendering
self.log.info("flagged for no render")
- families.append(family)
elif target == "Local":
# Local rendering
self.log.info("flagged for local render")
families.append("{}.local".format(family))
+ family = families_ak.lower()
elif target == "On farm":
# Farm rendering
self.log.info("flagged for farm render")
instance.data["transfer"] = False
families.append("{}.farm".format(family))
-
- # suffle family to `write` as it is main family
- # this will be changed later on in process
- if "render" in families:
- families.remove("render")
- family = "write"
- elif "prerender" in families:
- families.remove("prerender")
- family = "write"
+ family = families_ak.lower()
node.begin()
for i in nuke.allNodes():
instance.append(i)
node.end()
+ if not families and families_ak and family not in [
+ "render", "prerender"]:
+ families.append(families_ak.lower())
+
+ self.log.debug("__ family: `{}`".format(family))
self.log.debug("__ families: `{}`".format(families))
# Get format
@@ -124,7 +116,9 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
anlib.add_publish_knob(node)
# sync workfile version
- if not next((f for f in families
+ _families_test = [family] + families
+ self.log.debug("__ _families_test: `{}`".format(_families_test))
+ if not next((f for f in _families_test
if "prerender" in f),
None) and self.sync_workfile_version:
# get version to instance for integration
diff --git a/openpype/hosts/nuke/plugins/publish/precollect_writes.py b/openpype/hosts/nuke/plugins/publish/precollect_writes.py
index 57303bd42e..5eaac89e84 100644
--- a/openpype/hosts/nuke/plugins/publish/precollect_writes.py
+++ b/openpype/hosts/nuke/plugins/publish/precollect_writes.py
@@ -1,4 +1,5 @@
import os
+import re
import nuke
import pyblish.api
import openpype.api as pype
@@ -14,11 +15,8 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
hosts = ["nuke", "nukeassist"]
families = ["write"]
- # preset attributes
- sync_workfile_version = True
-
def process(self, instance):
- families = instance.data["families"]
+ _families_test = [instance.data["family"]] + instance.data["families"]
node = None
for x in instance:
@@ -63,7 +61,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
int(last_frame)
)
- if [fm for fm in families
+ if [fm for fm in _families_test
if fm in ["render", "prerender"]]:
if "representations" not in instance.data:
instance.data["representations"] = list()
@@ -91,9 +89,9 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
collected_frames_len))
# this will only run if slate frame is not already
# rendered from previews publishes
- if "slate" in instance.data["families"] \
+ if "slate" in _families_test \
and (frame_length == collected_frames_len) \
- and ("prerender" not in instance.data["families"]):
+ and ("prerender" not in _families_test):
frame_slate_str = "%0{}d".format(
len(str(last_frame))) % (first_frame - 1)
slate_frame = collected_frames[0].replace(
@@ -107,10 +105,17 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
self.log.debug("couldn't collect frames: {}".format(label))
# Add version data to instance
+ colorspace = node["colorspace"].value()
+
+ # remove default part of the string
+ if "default (" in colorspace:
+ colorspace = re.sub(r"default.\(|\)", "", colorspace)
+ self.log.debug("colorspace: `{}`".format(colorspace))
+
version_data = {
"families": [f.replace(".local", "").replace(".farm", "")
- for f in families if "write" not in f],
- "colorspace": node["colorspace"].value(),
+ for f in _families_test if "write" not in f],
+ "colorspace": colorspace
}
group_node = [x for x in instance if x.Class() == "Group"][0]
@@ -135,13 +140,12 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
"frameStartHandle": first_frame,
"frameEndHandle": last_frame,
"outputType": output_type,
- "families": families,
- "colorspace": node["colorspace"].value(),
+ "colorspace": colorspace,
"deadlineChunkSize": deadlineChunkSize,
"deadlinePriority": deadlinePriority
})
- if "prerender" in families:
+ if "prerender" in _families_test:
instance.data.update({
"family": "prerender",
"families": []
@@ -166,6 +170,4 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
"filename": api.get_representation_path(repre_doc)
}]
- self.log.debug("families: {}".format(families))
-
self.log.debug("instance.data: {}".format(instance.data))
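The colorspace cleanup added in `precollect_writes.py` above strips Nuke's `default (...)` wrapper from the knob value. The pattern is copied from the diff; the sample values in the test are illustrative:

```python
import re


def strip_default_colorspace(colorspace):
    # Nuke reports unchanged colorspace knobs as e.g. "default (sRGB)";
    # keep only the bare colorspace name.
    if "default (" in colorspace:
        colorspace = re.sub(r"default.\(|\)", "", colorspace)
    return colorspace
```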
diff --git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
index 21afc5313b..8b71aff1ac 100644
--- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
+++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py
@@ -5,23 +5,50 @@ import clique
@pyblish.api.log
-class RepairCollectionAction(pyblish.api.Action):
- label = "Repair"
+class RepairActionBase(pyblish.api.Action):
on = "failed"
icon = "wrench"
+ @staticmethod
+ def get_instance(context, plugin):
+ # Get the errored instances
+ failed = []
+ for result in context.data["results"]:
+ if (result["error"] is not None and result["instance"] is not None
+ and result["instance"] not in failed):
+ failed.append(result["instance"])
+
+ # Apply pyblish.logic to get the instances for the plug-in
+ return pyblish.api.instances_by_plugin(failed, plugin)
+
+ def repair_knob(self, instances, state):
+ for instance in instances:
+ files_remove = [os.path.join(instance.data["outputDir"], f)
+ for r in instance.data.get("representations", [])
+ for f in r.get("files", [])
+ ]
+ self.log.info("Files to be removed: {}".format(files_remove))
+ for f in files_remove:
+ os.remove(f)
+ self.log.debug("removing file: {}".format(f))
+ instance[0]["render"].setValue(state)
+ self.log.info("Rendering toggled to `{}`".format(state))
+
+
+class RepairCollectionActionToLocal(RepairActionBase):
+ label = "Repair > rerender with `Local` machine"
+
def process(self, context, plugin):
- self.log.info(context[0][0])
- files_remove = [os.path.join(context[0].data["outputDir"], f)
- for r in context[0].data.get("representations", [])
- for f in r.get("files", [])
- ]
- self.log.info("Files to be removed: {}".format(files_remove))
- for f in files_remove:
- os.remove(f)
- self.log.debug("removing file: {}".format(f))
- context[0][0]["render"].setValue(True)
- self.log.info("Rendering toggled ON")
+ instances = self.get_instance(context, plugin)
+ self.repair_knob(instances, "Local")
+
+
+class RepairCollectionActionToFarm(RepairActionBase):
+ label = "Repair > rerender `On farm` with remote machines"
+
+ def process(self, context, plugin):
+ instances = self.get_instance(context, plugin)
+ self.repair_knob(instances, "On farm")
class ValidateRenderedFrames(pyblish.api.InstancePlugin):
@@ -32,26 +59,28 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
label = "Validate rendered frame"
hosts = ["nuke", "nukestudio"]
- actions = [RepairCollectionAction]
+ actions = [RepairCollectionActionToLocal, RepairCollectionActionToFarm]
+
def process(self, instance):
- for repre in instance.data.get('representations'):
+ for repre in instance.data["representations"]:
- if not repre.get('files'):
+ if not repre.get("files"):
msg = ("no frames were collected, "
"you need to render them")
self.log.error(msg)
raise ValidationException(msg)
collections, remainder = clique.assemble(repre["files"])
- self.log.info('collections: {}'.format(str(collections)))
- self.log.info('remainder: {}'.format(str(remainder)))
+ self.log.info("collections: {}".format(str(collections)))
+ self.log.info("remainder: {}".format(str(remainder)))
collection = collections[0]
frame_length = int(
- instance.data["frameEndHandle"] - instance.data["frameStartHandle"] + 1
+ instance.data["frameEndHandle"]
+ - instance.data["frameStartHandle"] + 1
)
if frame_length != 1:
@@ -65,15 +94,10 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
self.log.error(msg)
raise ValidationException(msg)
- # if len(remainder) != 0:
- # msg = "There are some extra files in folder"
- # self.log.error(msg)
- # raise ValidationException(msg)
-
collected_frames_len = int(len(collection.indexes))
- self.log.info('frame_length: {}'.format(frame_length))
+ self.log.info("frame_length: {}".format(frame_length))
self.log.info(
- 'len(collection.indexes): {}'.format(collected_frames_len)
+ "len(collection.indexes): {}".format(collected_frames_len)
)
if ("slate" in instance.data["families"]) \
@@ -84,6 +108,6 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
"{} missing frames. Use repair to render all frames"
).format(__name__)
- instance.data['collection'] = collection
+ instance.data["collection"] = collection
return
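The `RepairActionBase.get_instance` helper above first gathers unique errored instances from pyblish's results before filtering them by plug-in. That gathering step in isolation, with plain dicts standing in for pyblish result entries (an assumption made for this sketch):

```python
def collect_failed(results):
    """Collect unique instances that errored, preserving order.

    ``results`` mimics the shape of pyblish's ``context.data["results"]``:
    a list of dicts with "error" and "instance" keys.
    """
    failed = []
    for result in results:
        instance = result.get("instance")
        if (result.get("error") is not None and instance is not None
                and instance not in failed):
            failed.append(instance)
    return failed
```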
diff --git a/openpype/hosts/tvpaint/api/lib.py b/openpype/hosts/tvpaint/api/lib.py
index cbc86f7b03..539cebe646 100644
--- a/openpype/hosts/tvpaint/api/lib.py
+++ b/openpype/hosts/tvpaint/api/lib.py
@@ -77,8 +77,9 @@ def set_context_settings(asset_doc=None):
handle_start = handles
handle_end = handles
- frame_start -= int(handle_start)
- frame_end += int(handle_end)
+ # Always start from 0 Mark In and set only Mark Out
+ mark_in = 0
+ mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
- execute_george("tv_markin {} set".format(frame_start - 1))
- execute_george("tv_markout {} set".format(frame_end - 1))
+ execute_george("tv_markin {} set".format(mark_in))
+ execute_george("tv_markout {} set".format(mark_out))
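The Mark In/Out arithmetic above always normalizes Mark In to 0 and derives Mark Out from the frame range plus handles. Reduced to a pure function (the frame numbers in the test are illustrative):

```python
def compute_marks(frame_start, frame_end, handle_start, handle_end):
    # Always start from 0 Mark In; Mark Out covers the frame range
    # plus both handles.
    mark_in = 0
    mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
    return mark_in, mark_out
```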
diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_instance_frames.py b/openpype/hosts/tvpaint/plugins/publish/collect_instance_frames.py
new file mode 100644
index 0000000000..f291c363b8
--- /dev/null
+++ b/openpype/hosts/tvpaint/plugins/publish/collect_instance_frames.py
@@ -0,0 +1,37 @@
+import pyblish.api
+
+
+class CollectOutputFrameRange(pyblish.api.ContextPlugin):
+ """Collect frame start/end for instances from context.
+
+ When instances are collected the context does not yet contain
+ `frameStart` and `frameEnd` keys. They are collected later by the
+ global plugin `CollectAvalonEntities`, so this plugin copies them
+ onto instances that are missing them.
+ """
+ label = "Collect output frame range"
+ order = pyblish.api.CollectorOrder
+ hosts = ["tvpaint"]
+
+ def process(self, context):
+ for instance in context:
+ frame_start = instance.data.get("frameStart")
+ frame_end = instance.data.get("frameEnd")
+ if frame_start is not None and frame_end is not None:
+ self.log.debug(
+ "Instance {} already has set frames {}-{}".format(
+ str(instance), frame_start, frame_end
+ )
+ )
+ continue
+
+ frame_start = context.data.get("frameStart")
+ frame_end = context.data.get("frameEnd")
+
+ instance.data["frameStart"] = frame_start
+ instance.data["frameEnd"] = frame_end
+
+ self.log.info(
+ "Set frames {}-{} on instance {} ".format(
+ frame_start, frame_end, str(instance)
+ )
+ )
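The fallback in `CollectOutputFrameRange` above can be reduced to a pure function over plain dicts (names are illustrative; pyblish objects behave like dicts via `.data` here):

```python
def apply_context_frames(instances, context_data):
    """Copy frameStart/frameEnd from context onto instances missing them."""
    for instance_data in instances:
        frame_start = instance_data.get("frameStart")
        frame_end = instance_data.get("frameEnd")
        if frame_start is not None and frame_end is not None:
            # keep frames already set on the instance
            continue
        instance_data["frameStart"] = context_data.get("frameStart")
        instance_data["frameEnd"] = context_data.get("frameEnd")
```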
diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py
index 0808dc06b1..27bd8e9ede 100644
--- a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py
+++ b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py
@@ -78,8 +78,13 @@ class CollectInstances(pyblish.api.ContextPlugin):
if instance is None:
continue
- instance.data["frameStart"] = context.data["sceneMarkIn"] + 1
- instance.data["frameEnd"] = context.data["sceneMarkOut"] + 1
+ any_visible = False
+ for layer in instance.data["layers"]:
+ if layer["visible"]:
+ any_visible = True
+ break
+
+ instance.data["publish"] = any_visible
self.log.debug("Created instance: {}\n{}".format(
instance, json.dumps(instance.data, indent=4)
@@ -108,7 +113,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
group_id = instance_data["group_id"]
group_layers = []
for layer in layers_data:
- if layer["group_id"] == group_id and layer["visible"]:
+ if layer["group_id"] == group_id:
group_layers.append(layer)
if not group_layers:
diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py b/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py
index 4409413ff6..13c6c9eb78 100644
--- a/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py
+++ b/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py
@@ -57,7 +57,10 @@ class CollectWorkfileData(pyblish.api.ContextPlugin):
# Collect context from workfile metadata
self.log.info("Collecting workfile context")
+
workfile_context = pipeline.get_current_workfile_context()
+ # Store workfile context to pyblish context
+ context.data["workfile_context"] = workfile_context
if workfile_context:
# Change current context with context from workfile
key_map = (
@@ -67,16 +70,27 @@ class CollectWorkfileData(pyblish.api.ContextPlugin):
for env_key, key in key_map:
avalon.api.Session[env_key] = workfile_context[key]
os.environ[env_key] = workfile_context[key]
+ self.log.info("Context changed to: {}".format(workfile_context))
+
+ asset_name = workfile_context["asset"]
+ task_name = workfile_context["task"]
+
else:
+ asset_name = current_context["asset"]
+ task_name = current_context["task"]
# Handle older workfiles or workfiles without metadata
- self.log.warning(
+ self.log.warning((
"Workfile does not contain information about context."
" Using current Session context."
- )
- workfile_context = current_context.copy()
+ ))
- context.data["workfile_context"] = workfile_context
- self.log.info("Context changed to: {}".format(workfile_context))
+ # Store context asset name
+ context.data["asset"] = asset_name
+ self.log.info(
+ "Context is set to Asset: \"{}\" and Task: \"{}\"".format(
+ asset_name, task_name
+ )
+ )
# Collect instances
self.log.info("Collecting instance data from workfile")
diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
index 0d125a1a50..007b5c41f1 100644
--- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
+++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py
@@ -1,8 +1,6 @@
import os
import shutil
-import time
import tempfile
-import multiprocessing
import pyblish.api
from avalon.tvpaint import lib
@@ -45,10 +43,64 @@ class ExtractSequence(pyblish.api.Extractor):
)
family_lowered = instance.data["family"].lower()
- frame_start = instance.data["frameStart"]
- frame_end = instance.data["frameEnd"]
+ mark_in = instance.context.data["sceneMarkIn"]
+ mark_out = instance.context.data["sceneMarkOut"]
+ # Frame start/end may be stored as float
+ frame_start = int(instance.data["frameStart"])
+ frame_end = int(instance.data["frameEnd"])
- filename_template = self._get_filename_template(frame_end)
+ # Handles are not stored per instance but on Context
+ handle_start = instance.context.data["handleStart"]
+ handle_end = instance.context.data["handleEnd"]
+
+ # --- Fallbacks ----------------------------------------------------
+ # This is required if validations of ranges are ignored.
+ # - none of this code changes processing if the range to render
+ # matches the range of the expected output
+
+ # Prepare output frames
+ output_frame_start = frame_start - handle_start
+ output_frame_end = frame_end + handle_end
+
+ # Change output frame start to 0 if handles cause it's negative number
+ if output_frame_start < 0:
+ self.log.warning((
+ "Frame start with handles has negative value."
+ " Changed to \"0\". Frames start: {}, Handle Start: {}"
+ ).format(frame_start, handle_start))
+ output_frame_start = 0
+
+ # Check Marks range and output range
+ output_range = output_frame_end - output_frame_start
+ marks_range = mark_out - mark_in
+
+ # Lower Mark Out if mark range is bigger than output
+ # - avoids rendering unused frames
+ if output_range < marks_range:
+ new_mark_out = mark_out - (marks_range - output_range)
+ self.log.warning((
+ "Lowering render range to {} frames. Changed Mark Out {} -> {}"
+ ).format(output_range + 1, mark_out, new_mark_out))
+ # Assign new mark out to variable
+ mark_out = new_mark_out
+
+ # Lower output frame end so representation has right `frameEnd` value
+ elif output_range > marks_range:
+ new_output_frame_end = (
+ output_frame_end - (output_range - marks_range)
+ )
+ self.log.warning((
+ "Lowering representation range to {} frames."
+ " Changed frame end {} -> {}"
+ ).format(marks_range + 1, output_frame_end, new_output_frame_end))
+ output_frame_end = new_output_frame_end
+
+ # -------------------------------------------------------------------
+
+ filename_template = self._get_filename_template(
+ # Use the biggest number
+ max(mark_out, frame_end)
+ )
ext = os.path.splitext(filename_template)[1].replace(".", "")
self.log.debug("Using file template \"{}\"".format(filename_template))
@@ -57,7 +109,9 @@ class ExtractSequence(pyblish.api.Extractor):
output_dir = instance.data.get("stagingDir")
if not output_dir:
# Create temp folder if staging dir is not set
- output_dir = tempfile.mkdtemp().replace("\\", "/")
+ output_dir = (
+ tempfile.mkdtemp(prefix="tvpaint_render_")
+ ).replace("\\", "/")
instance.data["stagingDir"] = output_dir
self.log.debug(
@@ -65,23 +119,36 @@ class ExtractSequence(pyblish.api.Extractor):
)
if instance.data["family"] == "review":
- repre_files, thumbnail_fullpath = self.render_review(
- filename_template, output_dir, frame_start, frame_end
+ output_filenames, thumbnail_fullpath = self.render_review(
+ filename_template, output_dir, mark_in, mark_out
)
else:
# Render output
- repre_files, thumbnail_fullpath = self.render(
- filename_template, output_dir, frame_start, frame_end,
+ output_filenames, thumbnail_fullpath = self.render(
+ filename_template, output_dir,
+ mark_in, mark_out,
filtered_layers
)
+ # Stop if extractor did not create any output
+ if not output_filenames:
+ self.log.warning("Extractor did not create any output.")
+ return
+
+ repre_files = self._rename_output_files(
+ filename_template, output_dir,
+ mark_in, mark_out,
+ output_frame_start, output_frame_end
+ )
+
# Fill tags and new families
tags = []
if family_lowered in ("review", "renderlayer"):
tags.append("review")
# Sequence of one frame
- if len(repre_files) == 1:
+ single_file = len(repre_files) == 1
+ if single_file:
repre_files = repre_files[0]
new_repre = {
@@ -89,10 +156,13 @@ class ExtractSequence(pyblish.api.Extractor):
"ext": ext,
"files": repre_files,
"stagingDir": output_dir,
- "frameStart": frame_start,
- "frameEnd": frame_end,
"tags": tags
}
+
+ if not single_file:
+ new_repre["frameStart"] = output_frame_start
+ new_repre["frameEnd"] = output_frame_end
+
self.log.debug("Creating new representation: {}".format(new_repre))
instance.data["representations"].append(new_repre)
@@ -133,9 +203,45 @@ class ExtractSequence(pyblish.api.Extractor):
return "{{frame:0>{}}}".format(frame_padding) + ".png"
- def render_review(
- self, filename_template, output_dir, frame_start, frame_end
+ def _rename_output_files(
+ self, filename_template, output_dir,
+ mark_in, mark_out, output_frame_start, output_frame_end
):
+ # Use different ranges based on Mark In and output Frame Start values
+ # - this is to make sure that filename renaming won't affect files that
+ # are not renamed yet
+ mark_start_is_less = bool(mark_in < output_frame_start)
+ if mark_start_is_less:
+ marks_range = range(mark_out, mark_in - 1, -1)
+ frames_range = range(output_frame_end, output_frame_start - 1, -1)
+ else:
+ # This is the less likely situation, as frame start will in most
+ # cases be higher than Mark In.
+ marks_range = range(mark_in, mark_out + 1)
+ frames_range = range(output_frame_start, output_frame_end + 1)
+
+ repre_filepaths = []
+ for mark, frame in zip(marks_range, frames_range):
+ new_filename = filename_template.format(frame=frame)
+ new_filepath = os.path.join(output_dir, new_filename)
+
+ repre_filepaths.append(new_filepath)
+
+ if mark != frame:
+ old_filename = filename_template.format(frame=mark)
+ old_filepath = os.path.join(output_dir, old_filename)
+ os.rename(old_filepath, new_filepath)
+
+ # Reverse repre files order if ranges were iterated backwards
+ if mark_start_is_less:
+ repre_filepaths = list(reversed(repre_filepaths))
+
+ return [
+ os.path.basename(path)
+ for path in repre_filepaths
+ ]
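The reverse iteration in `_rename_output_files` exists so a target filename never collides with a file that is still waiting to be renamed. A standalone sketch of that collision-safe idea, using a hypothetical `rename_frames` helper rather than the plugin's own method:

```python
import os
import tempfile


def rename_frames(template, directory, mark_in, mark_out, out_start):
    """Rename rendered frames from the mark range to the output range."""
    # When frame numbers shift upwards, rename from the end of the range
    # so the target name never collides with a not-yet-renamed file.
    offset = out_start - mark_in
    marks = range(mark_in, mark_out + 1)
    if offset > 0:
        marks = reversed(marks)

    renamed = []
    for mark in marks:
        old_path = os.path.join(directory, template.format(frame=mark))
        new_path = os.path.join(
            directory, template.format(frame=mark + offset)
        )
        if old_path != new_path:
            os.rename(old_path, new_path)
        renamed.append(os.path.basename(new_path))
    return sorted(renamed)


tmp_dir = tempfile.mkdtemp()
for frame in range(3):
    open(
        os.path.join(tmp_dir, "{frame:0>4}.png".format(frame=frame)), "w"
    ).close()

# Shift frames 0-2 to 1001-1003 without overwriting anything
print(rename_frames("{frame:0>4}.png", tmp_dir, 0, 2, 1001))
# → ['1001.png', '1002.png', '1003.png']
```

Renaming in ascending order here would turn `0000.png` into `1001.png` safely only because the ranges do not overlap; with overlapping ranges the descending pass is what prevents data loss.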
+
+ def render_review(self, filename_template, output_dir, mark_in, mark_out):
""" Export images from TVPaint using `tv_savesequence` command.
Args:
@@ -144,8 +250,8 @@ class ExtractSequence(pyblish.api.Extractor):
keyword argument `{frame}` or index argument (for same value).
Extension in template must match `save_mode`.
output_dir (str): Directory where files will be stored.
- first_frame (int): Starting frame from which export will begin.
- last_frame (int): On which frame export will end.
+ mark_in (int): Starting frame index from which export will begin.
+ mark_out (int): On which frame index export will end.
Retruns:
tuple: With 2 items first is list of filenames second is path to
@@ -154,10 +260,8 @@ class ExtractSequence(pyblish.api.Extractor):
self.log.debug("Preparing data for rendering.")
first_frame_filepath = os.path.join(
output_dir,
- filename_template.format(frame=frame_start)
+ filename_template.format(frame=mark_in)
)
- mark_in = frame_start - 1
- mark_out = frame_end - 1
george_script_lines = [
"tv_SaveMode \"PNG\"",
@@ -170,13 +274,22 @@ class ExtractSequence(pyblish.api.Extractor):
]
lib.execute_george_through_file("\n".join(george_script_lines))
- output = []
first_frame_filepath = None
- for frame in range(frame_start, frame_end + 1):
+ output_filenames = []
+ for frame in range(mark_in, mark_out + 1):
filename = filename_template.format(frame=frame)
- output.append(filename)
+ output_filenames.append(filename)
+
+ filepath = os.path.join(output_dir, filename)
+ if not os.path.exists(filepath):
+ raise AssertionError(
+ "Output was not rendered. File was not found: {}".format(
+ filepath
+ )
+ )
+
if first_frame_filepath is None:
- first_frame_filepath = os.path.join(output_dir, filename)
+ first_frame_filepath = filepath
thumbnail_filepath = os.path.join(output_dir, "thumbnail.jpg")
if first_frame_filepath and os.path.exists(first_frame_filepath):
@@ -184,11 +297,10 @@ class ExtractSequence(pyblish.api.Extractor):
thumbnail_obj = Image.new("RGB", source_img.size, (255, 255, 255))
thumbnail_obj.paste(source_img)
thumbnail_obj.save(thumbnail_filepath)
- return output, thumbnail_filepath
- def render(
- self, filename_template, output_dir, frame_start, frame_end, layers
- ):
+ return output_filenames, thumbnail_filepath
+
+ def render(self, filename_template, output_dir, mark_in, mark_out, layers):
""" Export images from TVPaint.
Args:
@@ -197,8 +309,8 @@ class ExtractSequence(pyblish.api.Extractor):
keyword argument `{frame}` or index argument (for same value).
Extension in template must match `save_mode`.
output_dir (str): Directory where files will be stored.
- first_frame (int): Starting frame from which export will begin.
- last_frame (int): On which frame export will end.
+ mark_in (int): Starting frame index from which export will begin.
+ mark_out (int): On which frame index export will end.
layers (list): List of layers to be exported.
Retruns:
@@ -219,14 +331,11 @@ class ExtractSequence(pyblish.api.Extractor):
# Sort layer positions in reverse order
sorted_positions = list(reversed(sorted(layers_by_position.keys())))
if not sorted_positions:
- return
+ return [], None
self.log.debug("Collecting pre/post behavior of individual layers.")
behavior_by_layer_id = lib.get_layers_pre_post_behavior(layer_ids)
- mark_in_index = frame_start - 1
- mark_out_index = frame_end - 1
-
tmp_filename_template = "pos_{pos}." + filename_template
files_by_position = {}
@@ -239,25 +348,47 @@ class ExtractSequence(pyblish.api.Extractor):
tmp_filename_template,
output_dir,
behavior,
- mark_in_index,
- mark_out_index
+ mark_in,
+ mark_out
)
- files_by_position[position] = files_by_frames
+ if files_by_frames:
+ files_by_position[position] = files_by_frames
+ else:
+ self.log.warning((
+ "Skipped layer \"{}\". Probably out of Mark In/Out range."
+ ).format(layer["name"]))
+
+ if not files_by_position:
+ layer_names = set(layer["name"] for layer in layers)
+ joined_names = ", ".join(
+ ["\"{}\"".format(name) for name in layer_names]
+ )
+ self.log.warning(
+ "Layers {} do not have content in range {} - {}".format(
+ joined_names, mark_in, mark_out
+ )
+ )
+ return [], None
output_filepaths = self._composite_files(
files_by_position,
- mark_in_index,
- mark_out_index,
+ mark_in,
+ mark_out,
filename_template,
output_dir
)
self._cleanup_tmp_files(files_by_position)
- thumbnail_src_filepath = None
- thumbnail_filepath = None
- if output_filepaths:
- thumbnail_src_filepath = tuple(sorted(output_filepaths))[0]
+ output_filenames = [
+ os.path.basename(filepath)
+ for filepath in output_filepaths
+ ]
+ thumbnail_src_filepath = None
+ if output_filepaths:
+ thumbnail_src_filepath = output_filepaths[0]
+
+ thumbnail_filepath = None
if thumbnail_src_filepath and os.path.exists(thumbnail_src_filepath):
source_img = Image.open(thumbnail_src_filepath)
thumbnail_filepath = os.path.join(output_dir, "thumbnail.jpg")
@@ -265,11 +396,7 @@ class ExtractSequence(pyblish.api.Extractor):
thumbnail_obj.paste(source_img)
thumbnail_obj.save(thumbnail_filepath)
- repre_files = [
- os.path.basename(path)
- for path in output_filepaths
- ]
- return repre_files, thumbnail_filepath
+ return output_filenames, thumbnail_filepath
def _render_layer(
self,
@@ -283,6 +410,22 @@ class ExtractSequence(pyblish.api.Extractor):
layer_id = layer["layer_id"]
frame_start_index = layer["frame_start"]
frame_end_index = layer["frame_end"]
+
+ pre_behavior = behavior["pre"]
+ post_behavior = behavior["post"]
+
+ # Check if layer is before mark in
+ if frame_end_index < mark_in_index:
+ # Skip layer if post behavior is "none"
+ if post_behavior == "none":
+ return {}
+
+ # Check if layer is after mark out
+ elif frame_start_index > mark_out_index:
+ # Skip layer if pre behavior is "none"
+ if pre_behavior == "none":
+ return {}
+
exposure_frames = lib.get_exposure_frames(
layer_id, frame_start_index, frame_end_index
)
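The early-exit added to `_render_layer` checks whether a layer can contribute any frames inside the Mark In/Out range at all. The same decision as a pure-data sketch (hypothetical helper, mirroring the checks above):

```python
def should_render_layer(frame_start, frame_end, mark_in, mark_out,
                        pre_behavior, post_behavior):
    """Decide whether a layer can contribute frames inside Mark In/Out."""
    # Layer ends before Mark In: only its post behavior can fill the range.
    if frame_end < mark_in:
        return post_behavior != "none"
    # Layer starts after Mark Out: only its pre behavior can fill the range.
    if frame_start > mark_out:
        return pre_behavior != "none"
    # Layer overlaps the Mark In/Out range, so it always renders.
    return True


print(should_render_layer(0, 5, 10, 20, "none", "loop"))
# → True
```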
@@ -341,8 +484,6 @@ class ExtractSequence(pyblish.api.Extractor):
self.log.debug("Filled frames {}".format(str(_debug_filled_frames)))
# Fill frames by pre/post behavior of layer
- pre_behavior = behavior["pre"]
- post_behavior = behavior["post"]
self.log.debug((
"Completing image sequence of layer by pre/post behavior."
" PRE: {} | POST: {}"
@@ -530,17 +671,12 @@ class ExtractSequence(pyblish.api.Extractor):
filepath = position_data[frame_idx]
images_by_frame[frame_idx].append(filepath)
- process_count = os.cpu_count()
- if process_count > 1:
- process_count -= 1
-
- processes = {}
output_filepaths = []
missing_frame_paths = []
random_frame_path = None
for frame_idx in sorted(images_by_frame.keys()):
image_filepaths = images_by_frame[frame_idx]
- output_filename = filename_template.format(frame=frame_idx + 1)
+ output_filename = filename_template.format(frame=frame_idx)
output_filepath = os.path.join(output_dir, output_filename)
output_filepaths.append(output_filepath)
@@ -553,45 +689,15 @@ class ExtractSequence(pyblish.api.Extractor):
if len(image_filepaths) == 1:
os.rename(image_filepaths[0], output_filepath)
- # Prepare process for compositing of images
+ # Composite images
else:
- processes[frame_idx] = multiprocessing.Process(
- target=composite_images,
- args=(image_filepaths, output_filepath)
- )
+ composite_images(image_filepaths, output_filepath)
# Store path of random output image that will 100% exist after all
# multiprocessing as mockup for missing frames
if random_frame_path is None:
random_frame_path = output_filepath
- self.log.info(
- "Running {} compositing processes - this mey take a while.".format(
- len(processes)
- )
- )
- # Wait until all compositing processes are done
- running_processes = {}
- while True:
- for idx in tuple(running_processes.keys()):
- process = running_processes[idx]
- if not process.is_alive():
- running_processes.pop(idx).join()
-
- if processes and len(running_processes) != process_count:
- indexes = list(processes.keys())
- for _ in range(process_count - len(running_processes)):
- if not indexes:
- break
- idx = indexes.pop(0)
- running_processes[idx] = processes.pop(idx)
- running_processes[idx].start()
-
- if not running_processes and not processes:
- break
-
- time.sleep(0.01)
-
self.log.debug(
"Creating transparent images for frames without render {}.".format(
str(missing_frame_paths)
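The now-serialized `composite_images` call (defined elsewhere in this plugin) stacks per-layer PNGs top-down. The standard "over" operator such compositing relies on can be written out for a single straight-alpha RGBA pixel; this is a generic sketch of alpha compositing math, not the plugin's actual implementation:

```python
def over(fg_pixel, bg_pixel):
    """Composite one straight-alpha RGBA pixel over another."""
    fr, fg, fb, fa = fg_pixel
    br, bg, bb, ba = bg_pixel
    # Standard "over" operator: result alpha first, then the weighted
    # color blend normalized by that alpha.
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)

    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a

    return (blend(fr, br), blend(fg, bg), blend(fb, bb), out_a)


# A fully opaque red pixel hides whatever is below it
print(over((1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)))
# → (1.0, 0.0, 0.0, 1.0)
```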
diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py b/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py
new file mode 100644
index 0000000000..4ce8d5347d
--- /dev/null
+++ b/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py
@@ -0,0 +1,55 @@
+import pyblish.api
+from avalon.tvpaint import pipeline
+
+
+class FixAssetNames(pyblish.api.Action):
+ """Repair the asset names.
+
+ Change instance metadata in the workfile.
+ """
+
+ label = "Repair"
+ icon = "wrench"
+ on = "failed"
+
+ def process(self, context, plugin):
+ context_asset_name = context.data["asset"]
+ old_instance_items = pipeline.list_instances()
+ new_instance_items = []
+ for instance_item in old_instance_items:
+ instance_asset_name = instance_item.get("asset")
+ if (
+ instance_asset_name
+ and instance_asset_name != context_asset_name
+ ):
+ instance_item["asset"] = context_asset_name
+ new_instance_items.append(instance_item)
+ pipeline._write_instances(new_instance_items)
+
+
+class ValidateMissingLayers(pyblish.api.ContextPlugin):
+ """Validate asset name present on instance.
+
+ Asset name on instance should be the same as context's.
+ """
+
+ label = "Validate Asset Names"
+ order = pyblish.api.ValidatorOrder
+ hosts = ["tvpaint"]
+ actions = [FixAssetNames]
+
+ def process(self, context):
+ context_asset_name = context.data["asset"]
+ for instance in context:
+ asset_name = instance.data.get("asset")
+ if asset_name and asset_name == context_asset_name:
+ continue
+
+ instance_label = (
+ instance.data.get("label") or instance.data["name"]
+ )
+ raise AssertionError((
+ "Different asset name on instance than context's."
+ " Instance \"{}\" has asset name: \"{}\""
+ " Context asset name is: \"{}\""
+ ).format(instance_label, asset_name, context_asset_name))
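The validator/action pair above follows the usual pyblish pattern: the validator raises on mismatch, the "Repair" action rewrites the workfile metadata. The repair logic in isolation, as a pure-data sketch with assumed instance-item shapes:

```python
def repair_asset_names(instance_items, context_asset_name):
    """Overwrite mismatched instance asset names with the context's name."""
    repaired = []
    for instance_item in instance_items:
        # Copy so the caller's data is left untouched.
        instance_item = dict(instance_item)
        asset_name = instance_item.get("asset")
        if asset_name and asset_name != context_asset_name:
            instance_item["asset"] = context_asset_name
        repaired.append(instance_item)
    return repaired


items = [
    {"name": "renderPassMain", "asset": "sh010"},
    {"name": "review", "asset": "sh020"},
]
print(repair_asset_names(items, "sh020"))
```

In the real action the result is written back with `pipeline._write_instances`; here it is just returned.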
diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_marks.py b/openpype/hosts/tvpaint/plugins/publish/validate_marks.py
index 73486d1005..e2ef81e4a4 100644
--- a/openpype/hosts/tvpaint/plugins/publish/validate_marks.py
+++ b/openpype/hosts/tvpaint/plugins/publish/validate_marks.py
@@ -14,37 +14,54 @@ class ValidateMarksRepair(pyblish.api.Action):
def process(self, context, plugin):
expected_data = ValidateMarks.get_expected_data(context)
- expected_data["markIn"] -= 1
- expected_data["markOut"] -= 1
-
- lib.execute_george("tv_markin {} set".format(expected_data["markIn"]))
+ lib.execute_george(
+ "tv_markin {} set".format(expected_data["markIn"])
+ )
lib.execute_george(
"tv_markout {} set".format(expected_data["markOut"])
)
class ValidateMarks(pyblish.api.ContextPlugin):
- """Validate mark in and out are enabled."""
+ """Validate that Mark In/Out are enabled and their duration is correct.
- label = "Validate Marks"
+ Mark In/Out do not have to match frameStart and frameEnd but the
+ duration must match.
+ """
+
+ label = "Validate Mark In/Out"
order = pyblish.api.ValidatorOrder
optional = True
actions = [ValidateMarksRepair]
@staticmethod
def get_expected_data(context):
+ scene_mark_in = context.data["sceneMarkIn"]
+
+ # Data collected in `CollectAvalonEntities`
+ frame_end = context.data["frameEnd"]
+ frame_start = context.data["frameStart"]
+ handle_start = context.data["handleStart"]
+ handle_end = context.data["handleEnd"]
+
+ # Calculate expected Mark Out (Mark In + duration - 1)
+ expected_mark_out = (
+ scene_mark_in
+ + (frame_end - frame_start)
+ + handle_start + handle_end
+ )
return {
- "markIn": int(context.data["frameStart"]),
+ "markIn": scene_mark_in,
"markInState": True,
- "markOut": int(context.data["frameEnd"]),
+ "markOut": expected_mark_out,
"markOutState": True
}
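`get_expected_data` derives the expected Mark Out from the scene Mark In plus the clip duration including handles. The arithmetic in isolation (example frame values are assumptions):

```python
def expected_mark_out(scene_mark_in, frame_start, frame_end,
                      handle_start, handle_end):
    """Expected Mark Out = Mark In + duration (with handles) - 1."""
    # Duration is (frame_end - frame_start + 1) + handle_start + handle_end
    # frames; because the mark range is inclusive, the +1 and -1 cancel out.
    return (
        scene_mark_in
        + (frame_end - frame_start)
        + handle_start
        + handle_end
    )


# Frames 1001-1010 with 5-frame handles on both sides, marks starting at 0:
# 20 frames total, so marks span 0-19.
print(expected_mark_out(0, 1001, 1010, 5, 5))  # → 19
```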
def process(self, context):
current_data = {
- "markIn": context.data["sceneMarkIn"] + 1,
+ "markIn": context.data["sceneMarkIn"],
"markInState": context.data["sceneMarkInState"],
- "markOut": context.data["sceneMarkOut"] + 1,
+ "markOut": context.data["sceneMarkOut"],
"markOutState": context.data["sceneMarkOutState"]
}
expected_data = self.get_expected_data(context)
diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_workfile_project_name.py b/openpype/hosts/tvpaint/plugins/publish/validate_workfile_project_name.py
index 7c1032fcad..cc664d8030 100644
--- a/openpype/hosts/tvpaint/plugins/publish/validate_workfile_project_name.py
+++ b/openpype/hosts/tvpaint/plugins/publish/validate_workfile_project_name.py
@@ -13,7 +13,15 @@ class ValidateWorkfileProjectName(pyblish.api.ContextPlugin):
order = pyblish.api.ValidatorOrder
def process(self, context):
- workfile_context = context.data["workfile_context"]
+ workfile_context = context.data.get("workfile_context")
+ # If workfile context is missing then the project is guaranteed to
+ # match the `AVALON_PROJECT` value
+ if not workfile_context:
+ self.log.info(
+ "Workfile context (\"workfile_context\") is not filled."
+ )
+ return
+
workfile_project_name = workfile_context["project"]
env_project_name = os.environ["AVALON_PROJECT"]
if workfile_project_name == env_project_name:
diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
index c698be63de..f084cccfc3 100644
--- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
+++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
@@ -24,7 +24,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
workdir = self.launch_context.env["AVALON_WORKDIR"]
- engine_version = self.app_name.split("_")[-1].replace("-", ".")
+ engine_version = self.app_name.split("/")[-1].replace("-", ".")
unreal_project_name = f"{asset_name}_{task_name}"
# Unreal is sensitive about project names longer then 20 chars
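The hook change swaps `"_"` for `"/"` because application full names are now grouped with a slash separator. The parsing in isolation (the example app name `"unreal/4-26"` is an assumed value):

```python
def parse_engine_version(app_name):
    """Extract the Unreal engine version from an application full name."""
    # App names look like "unreal/4-26"; the variant after the last "/"
    # encodes the engine version with "-" instead of ".".
    return app_name.split("/")[-1].replace("-", ".")


print(parse_engine_version("unreal/4-26"))  # → 4.26
```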
diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py b/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py
new file mode 100644
index 0000000000..a9279bf6e0
--- /dev/null
+++ b/openpype/hosts/unreal/plugins/load/load_alembic_geometrycache.py
@@ -0,0 +1,162 @@
+import os
+
+from avalon import api, pipeline
+from avalon.unreal import lib
+from avalon.unreal import pipeline as unreal_pipeline
+import unreal
+
+
+class PointCacheAlembicLoader(api.Loader):
+ """Load Point Cache from Alembic"""
+
+ families = ["model", "pointcache"]
+ label = "Import Alembic Point Cache"
+ representations = ["abc"]
+ icon = "cube"
+ color = "orange"
+
+ def load(self, context, name, namespace, data):
+ """
+ Load and containerise representation into Content Browser.
+
+ This is a two step process. First, the Alembic file is imported to a
+ temporary path and then `containerise()` is called on it - this moves
+ all content to a new directory, creates an AssetContainer there and
+ imprints it
+ with metadata. This will mark this path as container.
+
+ Args:
+ context (dict): application context
+ name (str): subset name
+ namespace (str): in Unreal this is basically path to container.
+ This is not passed here, so namespace is set
+ by `containerise()` because only then we know
+ real path.
+ data (dict): Those would be data to be imprinted. This is not used
+ now, data are imprinted by `containerise()`.
+
+ Returns:
+ list(str): list of container content
+ """
+
+ # Create directory for asset and avalon container
+ root = "/Game/Avalon/Assets"
+ asset = context.get('asset').get('name')
+ suffix = "_CON"
+ if asset:
+ asset_name = "{}_{}".format(asset, name)
+ else:
+ asset_name = "{}".format(name)
+
+ tools = unreal.AssetToolsHelpers().get_asset_tools()
+ asset_dir, container_name = tools.create_unique_asset_name(
+ "{}/{}/{}".format(root, asset, name), suffix="")
+
+ container_name += suffix
+
+ unreal.EditorAssetLibrary.make_directory(asset_dir)
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', self.fname)
+ task.set_editor_property('destination_path', asset_dir)
+ task.set_editor_property('destination_name', asset_name)
+ task.set_editor_property('replace_existing', False)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.GEOMETRY_CACHE)
+
+ options.geometry_cache_settings.set_editor_property(
+ 'flatten_tracks', False)
+
+ task.options = options
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
+
+ # Create Asset Container
+ lib.create_avalon_container(
+ container=container_name, path=asset_dir)
+
+ data = {
+ "schema": "openpype:container-2.0",
+ "id": pipeline.AVALON_CONTAINER_ID,
+ "asset": asset,
+ "namespace": asset_dir,
+ "container_name": container_name,
+ "asset_name": asset_name,
+ "loader": str(self.__class__.__name__),
+ "representation": context["representation"]["_id"],
+ "parent": context["representation"]["parent"],
+ "family": context["representation"]["context"]["family"]
+ }
+ unreal_pipeline.imprint(
+ "{}/{}".format(asset_dir, container_name), data)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ asset_dir, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ return asset_content
+
+ def update(self, container, representation):
+ name = container["asset_name"]
+ source_path = api.get_representation_path(representation)
+ destination_path = container["namespace"]
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', source_path)
+ task.set_editor_property('destination_path', destination_path)
+ # strip suffix
+ task.set_editor_property('destination_name', name)
+ task.set_editor_property('replace_existing', True)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.GEOMETRY_CACHE)
+
+ options.geometry_cache_settings.set_editor_property(
+ 'flatten_tracks', False)
+
+ task.options = options
+ # import the Alembic and replace existing data
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+ container_path = "{}/{}".format(container["namespace"],
+ container["objectName"])
+ # update metadata
+ unreal_pipeline.imprint(
+ container_path,
+ {
+ "representation": str(representation["_id"]),
+ "parent": str(representation["parent"])
+ })
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ destination_path, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ def remove(self, container):
+ path = container["namespace"]
+ parent_path = os.path.dirname(path)
+
+ unreal.EditorAssetLibrary.delete_directory(path)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ parent_path, recursive=False
+ )
+
+ if len(asset_content) == 0:
+ unreal.EditorAssetLibrary.delete_directory(parent_path)
diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py b/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py
new file mode 100644
index 0000000000..b652af0b89
--- /dev/null
+++ b/openpype/hosts/unreal/plugins/load/load_alembic_skeletalmesh.py
@@ -0,0 +1,156 @@
+import os
+
+from avalon import api, pipeline
+from avalon.unreal import lib
+from avalon.unreal import pipeline as unreal_pipeline
+import unreal
+
+
+class SkeletalMeshAlembicLoader(api.Loader):
+ """Load Unreal SkeletalMesh from Alembic"""
+
+ families = ["pointcache"]
+ label = "Import Alembic Skeletal Mesh"
+ representations = ["abc"]
+ icon = "cube"
+ color = "orange"
+
+ def load(self, context, name, namespace, data):
+ """
+ Load and containerise representation into Content Browser.
+
+ This is a two step process. First, the Alembic file is imported to a
+ temporary path and then `containerise()` is called on it - this moves
+ all content to a new directory, creates an AssetContainer there and
+ imprints it
+ with metadata. This will mark this path as container.
+
+ Args:
+ context (dict): application context
+ name (str): subset name
+ namespace (str): in Unreal this is basically path to container.
+ This is not passed here, so namespace is set
+ by `containerise()` because only then we know
+ real path.
+ data (dict): Those would be data to be imprinted. This is not used
+ now, data are imprinted by `containerise()`.
+
+ Returns:
+ list(str): list of container content
+ """
+
+ # Create directory for asset and avalon container
+ root = "/Game/Avalon/Assets"
+ asset = context.get('asset').get('name')
+ suffix = "_CON"
+ if asset:
+ asset_name = "{}_{}".format(asset, name)
+ else:
+ asset_name = "{}".format(name)
+
+ tools = unreal.AssetToolsHelpers().get_asset_tools()
+ asset_dir, container_name = tools.create_unique_asset_name(
+ "{}/{}/{}".format(root, asset, name), suffix="")
+
+ container_name += suffix
+
+ unreal.EditorAssetLibrary.make_directory(asset_dir)
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', self.fname)
+ task.set_editor_property('destination_path', asset_dir)
+ task.set_editor_property('destination_name', asset_name)
+ task.set_editor_property('replace_existing', False)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.SKELETAL)
+
+ task.options = options
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
+
+ # Create Asset Container
+ lib.create_avalon_container(
+ container=container_name, path=asset_dir)
+
+ data = {
+ "schema": "openpype:container-2.0",
+ "id": pipeline.AVALON_CONTAINER_ID,
+ "asset": asset,
+ "namespace": asset_dir,
+ "container_name": container_name,
+ "asset_name": asset_name,
+ "loader": str(self.__class__.__name__),
+ "representation": context["representation"]["_id"],
+ "parent": context["representation"]["parent"],
+ "family": context["representation"]["context"]["family"]
+ }
+ unreal_pipeline.imprint(
+ "{}/{}".format(asset_dir, container_name), data)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ asset_dir, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ return asset_content
+
+ def update(self, container, representation):
+ name = container["asset_name"]
+ source_path = api.get_representation_path(representation)
+ destination_path = container["namespace"]
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', source_path)
+ task.set_editor_property('destination_path', destination_path)
+ # strip suffix
+ task.set_editor_property('destination_name', name)
+ task.set_editor_property('replace_existing', True)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.SKELETAL)
+
+ task.options = options
+ # import the Alembic and replace existing data
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+ container_path = "{}/{}".format(container["namespace"],
+ container["objectName"])
+ # update metadata
+ unreal_pipeline.imprint(
+ container_path,
+ {
+ "representation": str(representation["_id"]),
+ "parent": str(representation["parent"])
+ })
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ destination_path, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ def remove(self, container):
+ path = container["namespace"]
+ parent_path = os.path.dirname(path)
+
+ unreal.EditorAssetLibrary.delete_directory(path)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ parent_path, recursive=False
+ )
+
+ if len(asset_content) == 0:
+ unreal.EditorAssetLibrary.delete_directory(parent_path)
diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py b/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py
new file mode 100644
index 0000000000..12b9320f72
--- /dev/null
+++ b/openpype/hosts/unreal/plugins/load/load_alembic_staticmesh.py
@@ -0,0 +1,156 @@
+import os
+
+from avalon import api, pipeline
+from avalon.unreal import lib
+from avalon.unreal import pipeline as unreal_pipeline
+import unreal
+
+
+class StaticMeshAlembicLoader(api.Loader):
+ """Load Unreal StaticMesh from Alembic"""
+
+ families = ["model"]
+ label = "Import Alembic Static Mesh"
+ representations = ["abc"]
+ icon = "cube"
+ color = "orange"
+
+ def load(self, context, name, namespace, data):
+ """
+ Load and containerise representation into Content Browser.
+
+ This is a two step process. First, the Alembic file is imported to a
+ temporary path and then `containerise()` is called on it - this moves
+ all content to a new directory, creates an AssetContainer there and
+ imprints it
+ with metadata. This will mark this path as container.
+
+ Args:
+ context (dict): application context
+ name (str): subset name
+ namespace (str): in Unreal this is basically path to container.
+ This is not passed here, so namespace is set
+ by `containerise()` because only then we know
+ real path.
+ data (dict): Those would be data to be imprinted. This is not used
+ now, data are imprinted by `containerise()`.
+
+ Returns:
+ list(str): list of container content
+ """
+
+ # Create directory for asset and avalon container
+ root = "/Game/Avalon/Assets"
+ asset = context.get('asset').get('name')
+ suffix = "_CON"
+ if asset:
+ asset_name = "{}_{}".format(asset, name)
+ else:
+ asset_name = "{}".format(name)
+
+ tools = unreal.AssetToolsHelpers().get_asset_tools()
+ asset_dir, container_name = tools.create_unique_asset_name(
+ "{}/{}/{}".format(root, asset, name), suffix="")
+
+ container_name += suffix
+
+ unreal.EditorAssetLibrary.make_directory(asset_dir)
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', self.fname)
+ task.set_editor_property('destination_path', asset_dir)
+ task.set_editor_property('destination_name', asset_name)
+ task.set_editor_property('replace_existing', False)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.STATIC_MESH)
+
+ task.options = options
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
+
+ # Create Asset Container
+ lib.create_avalon_container(
+ container=container_name, path=asset_dir)
+
+ data = {
+ "schema": "openpype:container-2.0",
+ "id": pipeline.AVALON_CONTAINER_ID,
+ "asset": asset,
+ "namespace": asset_dir,
+ "container_name": container_name,
+ "asset_name": asset_name,
+ "loader": str(self.__class__.__name__),
+ "representation": context["representation"]["_id"],
+ "parent": context["representation"]["parent"],
+ "family": context["representation"]["context"]["family"]
+ }
+ unreal_pipeline.imprint(
+ "{}/{}".format(asset_dir, container_name), data)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ asset_dir, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ return asset_content
+
+ def update(self, container, representation):
+ name = container["asset_name"]
+ source_path = api.get_representation_path(representation)
+ destination_path = container["namespace"]
+
+ task = unreal.AssetImportTask()
+
+ task.set_editor_property('filename', source_path)
+ task.set_editor_property('destination_path', destination_path)
+ # strip suffix
+ task.set_editor_property('destination_name', name)
+ task.set_editor_property('replace_existing', True)
+ task.set_editor_property('automated', True)
+ task.set_editor_property('save', True)
+
+ # set import options here
+ # Unreal 4.24 ignores the settings. It works with Unreal 4.26
+ options = unreal.AbcImportSettings()
+ options.set_editor_property(
+ 'import_type', unreal.AlembicImportType.STATIC_MESH)
+
+ task.options = options
+ # import the Alembic and replace existing data
+ unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
+ container_path = "{}/{}".format(container["namespace"],
+ container["objectName"])
+ # update metadata
+ unreal_pipeline.imprint(
+ container_path,
+ {
+ "representation": str(representation["_id"]),
+ "parent": str(representation["parent"])
+ })
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ destination_path, recursive=True, include_folder=True
+ )
+
+ for a in asset_content:
+ unreal.EditorAssetLibrary.save_asset(a)
+
+ def remove(self, container):
+ path = container["namespace"]
+ parent_path = os.path.dirname(path)
+
+ unreal.EditorAssetLibrary.delete_directory(path)
+
+ asset_content = unreal.EditorAssetLibrary.list_assets(
+ parent_path, recursive=False
+ )
+
+ if len(asset_content) == 0:
+ unreal.EditorAssetLibrary.delete_directory(parent_path)
diff --git a/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py b/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py
index dbea1d5951..dcb566fa4c 100644
--- a/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py
+++ b/openpype/hosts/unreal/plugins/load/load_staticmeshfbx.py
@@ -1,7 +1,6 @@
import os
from avalon import api, pipeline
-from avalon import unreal as avalon_unreal
from avalon.unreal import lib
from avalon.unreal import pipeline as unreal_pipeline
import unreal
diff --git a/openpype/launcher_actions.py b/openpype/launcher_actions.py
deleted file mode 100644
index cf68dfb5c1..0000000000
--- a/openpype/launcher_actions.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import os
-import sys
-
-from avalon import api, pipeline
-
-PACKAGE_DIR = os.path.dirname(__file__)
-PLUGINS_DIR = os.path.join(PACKAGE_DIR, "plugins", "launcher")
-ACTIONS_DIR = os.path.join(PLUGINS_DIR, "actions")
-
-
-def register_launcher_actions():
- """Register specific actions which should be accessible in the launcher"""
-
- actions = []
- ext = ".py"
- sys.path.append(ACTIONS_DIR)
-
- for f in os.listdir(ACTIONS_DIR):
- file, extention = os.path.splitext(f)
- if ext in extention:
- module = __import__(file)
- klass = getattr(module, file)
- actions.append(klass)
-
- if actions is []:
- return
-
- for action in actions:
- print("Using launcher action from config @ '{}'".format(action.name))
- pipeline.register_plugin(api.Action, action)
diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py
index 2c1c70e663..1df89dbb21 100644
--- a/openpype/lib/__init__.py
+++ b/openpype/lib/__init__.py
@@ -6,11 +6,21 @@ import sys
import os
import site
-# add Python version specific vendor folder
-site.addsitedir(
- os.path.join(
- os.getenv("OPENPYPE_REPOS_ROOT", ""),
- "vendor", "python", "python_{}".format(sys.version[0])))
+# Add Python version specific vendor folder
+python_version_dir = os.path.join(
+ os.getenv("OPENPYPE_REPOS_ROOT", ""),
+ "openpype", "vendor", "python", "python_{}".format(sys.version[0])
+)
+# Prepend path in sys paths
+sys.path.insert(0, python_version_dir)
+site.addsitedir(python_version_dir)
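The snippet above both prepends the vendor directory to `sys.path` (so vendored packages win over site-packages) and registers it as a site dir (so any `.pth` files are processed). A hedged, self-contained mirror of that setup, with an assumed repos root:

```python
import os
import site
import sys


def add_vendor_path(repos_root, version=sys.version[0]):
    """Hypothetical mirror of OpenPype's vendor-dir registration."""
    vendor_dir = os.path.join(
        repos_root, "openpype", "vendor", "python",
        "python_{}".format(version)
    )
    # Prepend so vendored modules shadow anything in site-packages.
    sys.path.insert(0, vendor_dir)
    # addsitedir also processes .pth files; it is a no-op for a
    # missing directory.
    site.addsitedir(vendor_dir)
    return vendor_dir


path = add_vendor_path("/opt/repos", "3")
print(sys.path[0] == path)  # → True
```

Using `sys.path.insert(0, ...)` alone would be enough for plain packages; `addsitedir` is what makes `.pth`-based distributions in the vendor folder importable as well.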
+
+
+from .env_tools import (
+ env_value_to_bool,
+ get_paths_from_environ,
+ get_global_environments
+)
from .terminal import Terminal
from .execute import (
@@ -33,10 +43,11 @@ from .anatomy import (
from .config import get_datetime_data
-from .env_tools import (
- env_value_to_bool,
- get_paths_from_environ,
- get_global_environments
+from .vendor_bin_utils import (
+ get_vendor_bin_path,
+ get_oiio_tools_path,
+ get_ffmpeg_tool_path,
+ ffprobe_streams
)
from .python_module_tools import (
@@ -68,6 +79,16 @@ from .avalon_context import (
change_timer_to_current_context
)
+from .local_settings import (
+ IniSettingRegistry,
+ JSONSettingRegistry,
+ OpenPypeSecureRegistry,
+ OpenPypeSettingsRegistry,
+ get_local_site_id,
+ change_openpype_mongo_url,
+ get_openpype_username
+)
+
from .applications import (
ApplicationLaunchFailed,
ApplictionExecutableNotFound,
@@ -92,6 +113,7 @@ from .plugin_tools import (
TaskNotSetError,
get_subset_name,
filter_pyblish_plugins,
+ set_plugin_attributes_from_settings,
source_hash,
get_unique_layer_name,
get_background_layers,
@@ -101,26 +123,12 @@ from .plugin_tools import (
should_decompress
)
-from .local_settings import (
- IniSettingRegistry,
- JSONSettingRegistry,
- OpenPypeSecureRegistry,
- OpenPypeSettingsRegistry,
- get_local_site_id,
- change_openpype_mongo_url
-)
-
from .path_tools import (
version_up,
get_version_from_path,
get_last_version_from_path
)
-from .ffmpeg_utils import (
- get_ffmpeg_tool_path,
- ffprobe_streams
-)
-
from .editorial import (
is_overlapping_otio_ranges,
otio_range_to_frame_range,
@@ -143,6 +151,11 @@ __all__ = [
"get_paths_from_environ",
"get_global_environments",
+ "get_vendor_bin_path",
+ "get_oiio_tools_path",
+ "get_ffmpeg_tool_path",
+ "ffprobe_streams",
+
"modules_from_path",
"recursive_bases_from_class",
"classes_from_module",
@@ -168,6 +181,14 @@ __all__ = [
"change_timer_to_current_context",
+ "IniSettingRegistry",
+ "JSONSettingRegistry",
+ "OpenPypeSecureRegistry",
+ "OpenPypeSettingsRegistry",
+ "get_local_site_id",
+ "change_openpype_mongo_url",
+ "get_openpype_username",
+
"ApplicationLaunchFailed",
"ApplictionExecutableNotFound",
"ApplicationNotFound",
@@ -187,6 +208,7 @@ __all__ = [
"TaskNotSetError",
"get_subset_name",
"filter_pyblish_plugins",
+ "set_plugin_attributes_from_settings",
"source_hash",
"get_unique_layer_name",
"get_background_layers",
@@ -199,9 +221,6 @@ __all__ = [
"get_version_from_path",
"get_last_version_from_path",
- "ffprobe_streams",
- "get_ffmpeg_tool_path",
-
"terminal",
"merge_dict",
@@ -216,13 +235,6 @@ __all__ = [
"validate_mongo_connection",
"OpenPypeMongoConnection",
- "IniSettingRegistry",
- "JSONSettingRegistry",
- "OpenPypeSecureRegistry",
- "OpenPypeSettingsRegistry",
- "get_local_site_id",
- "change_openpype_mongo_url",
-
"timeit",
"is_overlapping_otio_ranges",
diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py
index 51c646d494..c5c192f51b 100644
--- a/openpype/lib/applications.py
+++ b/openpype/lib/applications.py
@@ -25,6 +25,7 @@ from . import (
PypeLogger,
Anatomy
)
+from .local_settings import get_openpype_username
from .avalon_context import (
get_workdir_data,
get_workdir_with_workdir_data
@@ -179,6 +180,7 @@ class Application:
if group.enabled:
enabled = data.get("enabled", True)
self.enabled = enabled
+ self.use_python_2 = data["use_python_2"]
self.label = data.get("variant_label") or name
self.full_name = "/".join((group.name, name))
@@ -261,14 +263,32 @@ class Application:
class ApplicationManager:
- def __init__(self):
- self.log = PypeLogger().get_logger(self.__class__.__name__)
+ """Load applications and tools and store them by their full name.
+
+ Args:
+ system_settings (dict): Preloaded system settings. When passed, the
+ manager will always use these values, which makes it possible to
+ create a manager with different settings.
+ """
+ def __init__(self, system_settings=None):
+ self.log = PypeLogger.get_logger(self.__class__.__name__)
self.app_groups = {}
self.applications = {}
self.tool_groups = {}
self.tools = {}
+ self._system_settings = system_settings
+
+ self.refresh()
+
+ def set_system_settings(self, system_settings):
+ """Ability to change init system settings.
+
+ This will trigger refresh of manager.
+ """
+ self._system_settings = system_settings
+
self.refresh()
def refresh(self):
@@ -278,9 +298,12 @@ class ApplicationManager:
self.tool_groups.clear()
self.tools.clear()
- settings = get_system_settings(
- clear_metadata=False, exclude_locals=False
- )
+ if self._system_settings is not None:
+ settings = copy.deepcopy(self._system_settings)
+ else:
+ settings = get_system_settings(
+ clear_metadata=False, exclude_locals=False
+ )
app_defs = settings["applications"]
for group_name, variant_defs in app_defs.items():
@@ -1224,7 +1247,7 @@ def _prepare_last_workfile(data, workdir):
file_template = anatomy.templates["work"]["file"]
workdir_data.update({
"version": 1,
- "user": os.environ.get("OPENPYPE_USERNAME") or getpass.getuser(),
+ "user": get_openpype_username(),
"ext": extensions[0]
})
diff --git a/openpype/lib/env_tools.py b/openpype/lib/env_tools.py
index 025c13a322..ede14e00b2 100644
--- a/openpype/lib/env_tools.py
+++ b/openpype/lib/env_tools.py
@@ -1,5 +1,4 @@
import os
-from openpype.settings import get_environments
def env_value_to_bool(env_key=None, value=None, default=False):
@@ -89,6 +88,7 @@ def get_global_environments(env=None):
"""
import acre
from openpype.modules import ModulesManager
+ from openpype.settings import get_environments
if env is None:
env = {}
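Moving the `get_environments` import into the function body is the usual way to break a circular import: the name is resolved when the function runs, after both modules finish loading. A minimal, self-contained sketch of the pattern (the `demo_*` module names are made up for illustration):

```python
import sys
import types


def make_module(name, source):
    """Create a module from source code and register it in sys.modules."""
    module = types.ModuleType(name)
    sys.modules[name] = module
    exec(source, module.__dict__)
    return module


# "demo_settings" plays the role of openpype.settings; "demo_env_tools"
# imports it inside the function body, so the import is resolved at call
# time rather than at module load time.
make_module("demo_settings", "ENVIRONMENTS = {'PATH_EXTRA': '/tools'}")
env_tools = make_module(
    "demo_env_tools",
    "def get_global_environments():\n"
    "    from demo_settings import ENVIRONMENTS\n"
    "    return dict(ENVIRONMENTS)\n",
)

assert env_tools.get_global_environments() == {"PATH_EXTRA": "/tools"}
```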
diff --git a/openpype/lib/local_settings.py b/openpype/lib/local_settings.py
index 56bdd047c9..67845c77cf 100644
--- a/openpype/lib/local_settings.py
+++ b/openpype/lib/local_settings.py
@@ -1,9 +1,11 @@
# -*- coding: utf-8 -*-
"""Package to deal with saving and retrieving user specific settings."""
import os
+import json
+import getpass
+import platform
from datetime import datetime
from abc import ABCMeta, abstractmethod
-import json
# TODO Use pype igniter logic instead of using duplicated code
# disable lru cache in Python 2
@@ -24,11 +26,11 @@ try:
except ImportError:
import ConfigParser as configparser
-import platform
-
import six
import appdirs
+from openpype.settings import get_local_settings
+
from .import validate_mongo_connection
_PLACEHOLDER = object()
@@ -538,3 +540,25 @@ def change_openpype_mongo_url(new_mongo_url):
if existing_value is not None:
registry.delete_item(key)
registry.set_item(key, new_mongo_url)
+
+
+def get_openpype_username():
+ """OpenPype username used for templates and publishing.
+
+ May be different than machine's username.
+
+ Returns the "OPENPYPE_USERNAME" environment variable when it is set,
+ otherwise tries local settings, and finally falls back to
+ `getpass.getuser()`, which returns the machine username.
+ """
+ username = os.environ.get("OPENPYPE_USERNAME")
+ if not username:
+ local_settings = get_local_settings()
+ username = (
+ local_settings
+ .get("general", {})
+ .get("username")
+ )
+ if not username:
+ username = getpass.getuser()
+ return username
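The resolution order above (environment variable, then local settings, then the machine account) can be exercised in isolation; the dicts passed in below are stand-ins for `os.environ` and the `get_local_settings()` result:

```python
import getpass


def resolve_username(env, local_settings):
    """Mimic get_openpype_username(): the env var wins, then local
    settings, then the machine account name."""
    username = env.get("OPENPYPE_USERNAME")
    if not username:
        username = local_settings.get("general", {}).get("username")
    if not username:
        username = getpass.getuser()
    return username


# Environment variable has the highest priority.
assert resolve_username({"OPENPYPE_USERNAME": "alice"}, {}) == "alice"
# Local settings are the first fallback.
assert resolve_username({}, {"general": {"username": "bob"}}) == "bob"
# Otherwise the machine username is used.
assert resolve_username({}, {}) == getpass.getuser()
```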
diff --git a/openpype/lib/log.py b/openpype/lib/log.py
index 9745279e28..39b6c67080 100644
--- a/openpype/lib/log.py
+++ b/openpype/lib/log.py
@@ -123,6 +123,8 @@ class PypeFormatter(logging.Formatter):
if record.exc_info is not None:
line_len = len(str(record.exc_info[1]))
+ if line_len > 30:
+ line_len = 30
out = "{}\n{}\n{}\n{}\n{}".format(
out,
line_len * "=",
diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py
index eb024383d3..44c688456e 100644
--- a/openpype/lib/plugin_tools.py
+++ b/openpype/lib/plugin_tools.py
@@ -9,6 +9,7 @@ import tempfile
from .execute import run_subprocess
from .profiles_filtering import filter_profiles
+from .vendor_bin_utils import get_oiio_tools_path
from openpype.settings import get_project_settings
@@ -127,7 +128,7 @@ def filter_pyblish_plugins(plugins):
plugin_kind = file.split(os.path.sep)[-2:-1][0]
# TODO: change after all plugins are moved one level up
- if host_from_file == "pype":
+ if host_from_file == "openpype":
host_from_file = "global"
try:
@@ -149,6 +150,95 @@ def filter_pyblish_plugins(plugins):
setattr(plugin, option, value)
+def set_plugin_attributes_from_settings(
+ plugins, superclass, host_name=None, project_name=None
+):
+ """Change attribute values on Avalon plugins by project settings.
+
+ This function should be used only in a host context. It modifies the
+ behavior of plugins based on project settings.
+
+ Args:
+ plugins (list): Plugins discovered by the original avalon discover
+ method.
+ superclass (object): Superclass of plugin type (e.g. Creator, Loader).
+ host_name (str): Name of the host for which plugins are loaded.
+ Value from environment `AVALON_APP` is used if not entered.
+ project_name (str): Name of project for which settings will be loaded.
+ Value from environment `AVALON_PROJECT` is used if not entered.
+ """
+
+ # determine host application to use for finding presets
+ if host_name is None:
+ host_name = os.environ.get("AVALON_APP")
+
+ if project_name is None:
+ project_name = os.environ.get("AVALON_PROJECT")
+
+ # Map plugin superclass to preset json. Currently supported are load
+ # and create (avalon.api.Loader and avalon.api.Creator).
+ plugin_type = None
+ if superclass.__name__.split(".")[-1] == "Loader":
+ plugin_type = "load"
+ elif superclass.__name__.split(".")[-1] == "Creator":
+ plugin_type = "create"
+
+ if not host_name or not project_name or plugin_type is None:
+ msg = "Skipped attributes override from settings."
+ if not host_name:
+ msg += " Host name is not defined."
+
+ if not project_name:
+ msg += " Project name is not defined."
+
+ if plugin_type is None:
+ msg += " Plugin type is unsupported for class {}.".format(
+ superclass.__name__
+ )
+
+ print(msg)
+ return
+
+ print(">>> Finding presets for {}:{} ...".format(host_name, plugin_type))
+
+ project_settings = get_project_settings(project_name)
+ plugin_type_settings = (
+ project_settings
+ .get(host_name, {})
+ .get(plugin_type, {})
+ )
+ global_type_settings = (
+ project_settings
+ .get("global", {})
+ .get(plugin_type, {})
+ )
+ if not global_type_settings and not plugin_type_settings:
+ return
+
+ for plugin in plugins:
+ plugin_name = plugin.__name__
+
+ plugin_settings = None
+ # Look for plugin settings in host specific settings
+ if plugin_name in plugin_type_settings:
+ plugin_settings = plugin_type_settings[plugin_name]
+
+ # Look for plugin settings in global settings
+ elif plugin_name in global_type_settings:
+ plugin_settings = global_type_settings[plugin_name]
+
+ if not plugin_settings:
+ continue
+
+ print(">>> We have preset for {}".format(plugin_name))
+ for option, value in plugin_settings.items():
+ if option == "enabled" and value is False:
+ setattr(plugin, "active", False)
+ print(" - is disabled by preset")
+ else:
+ setattr(plugin, option, value)
+ print(" - setting `{}`: `{}`".format(option, value))
+
+
def source_hash(filepath, *args):
"""Generate simple identifier for a source file.
This is used to identify whether a source file has previously been
@@ -235,7 +325,7 @@ def oiio_supported():
Returns:
(bool)
"""
- oiio_path = os.getenv("OPENPYPE_OIIO_PATH", "")
+ oiio_path = get_oiio_tools_path()
if not oiio_path or not os.path.exists(oiio_path):
log.debug("OIIOTool is not configured or not present at {}".
format(oiio_path))
@@ -269,7 +359,7 @@ def decompress(target_dir, file_url,
(int(input_frame_end) > int(input_frame_start))
oiio_cmd = []
- oiio_cmd.append(os.getenv("OPENPYPE_OIIO_PATH"))
+ oiio_cmd.append(get_oiio_tools_path())
oiio_cmd.append("--compression none")
@@ -328,7 +418,7 @@ def should_decompress(file_url):
"""
if oiio_supported():
output = run_subprocess([
- os.getenv("OPENPYPE_OIIO_PATH"),
+ get_oiio_tools_path(),
"--info", "-v", file_url])
return "compression: \"dwaa\"" in output or \
"compression: \"dwab\"" in output
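The override loop in `set_plugin_attributes_from_settings` can be sketched with plain classes; the settings dict below is a hypothetical shape mirroring what `get_project_settings` returns for one plugin type:

```python
def apply_plugin_settings(plugins, plugin_type_settings):
    """Minimal sketch of the override loop above: disable a plugin when
    "enabled" is False, otherwise copy values onto class attributes."""
    for plugin in plugins:
        plugin_settings = plugin_type_settings.get(plugin.__name__)
        if not plugin_settings:
            continue
        for option, value in plugin_settings.items():
            if option == "enabled" and value is False:
                plugin.active = False
            else:
                setattr(plugin, option, value)


class LoadImage:
    # Defaults that project settings may override.
    active = True
    representations = ["exr"]


apply_plugin_settings(
    [LoadImage],
    {"LoadImage": {"enabled": False, "representations": ["exr", "png"]}},
)
assert LoadImage.active is False
assert LoadImage.representations == ["exr", "png"]
```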
diff --git a/openpype/lib/ffmpeg_utils.py b/openpype/lib/vendor_bin_utils.py
similarity index 50%
rename from openpype/lib/ffmpeg_utils.py
rename to openpype/lib/vendor_bin_utils.py
index ba9f24c5d7..3b923cb608 100644
--- a/openpype/lib/ffmpeg_utils.py
+++ b/openpype/lib/vendor_bin_utils.py
@@ -1,33 +1,60 @@
import os
import logging
import json
+import platform
import subprocess
-from . import get_paths_from_environ
-
log = logging.getLogger("FFmpeg utils")
-def get_ffmpeg_tool_path(tool="ffmpeg"):
- """Find path to ffmpeg tool in FFMPEG_PATH paths.
+def get_vendor_bin_path(bin_app):
+ """Path to OpenPype vendorized binaries.
- Function looks for tool in paths set in FFMPEG_PATH environment. If tool
- exists then returns it's full path.
+ Vendorized executables are expected in a specific hierarchy inside the
+ build or the source code:
+
+ "{OPENPYPE_ROOT}/vendor/bin/{name of vendorized app}/{platform}"
Args:
- tool (string): tool name
+ bin_app (str): Name of vendorized application.
Returns:
- (str): tool name itself when tool path was not found. (FFmpeg path
- may be set in PATH environment variable)
+ str: Path to vendorized binaries folder.
"""
- dir_paths = get_paths_from_environ("FFMPEG_PATH")
- for dir_path in dir_paths:
- for file_name in os.listdir(dir_path):
- base, _ext = os.path.splitext(file_name)
- if base.lower() == tool.lower():
- return os.path.join(dir_path, tool)
- return tool
+ return os.path.join(
+ os.environ["OPENPYPE_ROOT"],
+ "vendor",
+ "bin",
+ bin_app,
+ platform.system().lower()
+ )
+
+
+def get_oiio_tools_path(tool="oiiotool"):
+ """Path to vendorized OpenImageIO tool executables.
+
+ Args:
+ tool (string): Tool name (oiiotool, maketx, ...).
+ Default is "oiiotool".
+ """
+ oiio_dir = get_vendor_bin_path("oiio")
+ return os.path.join(oiio_dir, tool)
+
+
+def get_ffmpeg_tool_path(tool="ffmpeg"):
+ """Path to vendorized FFmpeg executable.
+
+ Args:
+ tool (string): Tool name (ffmpeg, ffprobe, ...).
+ Default is "ffmpeg".
+
+ Returns:
+ str: Full path to ffmpeg executable.
+ """
+ ffmpeg_dir = get_vendor_bin_path("ffmpeg")
+ if platform.system().lower() == "windows":
+ ffmpeg_dir = os.path.join(ffmpeg_dir, "bin")
+ return os.path.join(ffmpeg_dir, tool)
def ffprobe_streams(path_to_file, logger=None):
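The vendorized-binary lookup above composes a path from the OpenPype root, the application name, and the platform. A sketch with an explicit root argument instead of the `OPENPYPE_ROOT` environment variable:

```python
import os
import platform


def vendor_bin_path(openpype_root, bin_app):
    """Sketch of get_vendor_bin_path() with an explicit root argument."""
    return os.path.join(
        openpype_root, "vendor", "bin", bin_app, platform.system().lower()
    )


def ffmpeg_tool_path(openpype_root, tool="ffmpeg"):
    path = vendor_bin_path(openpype_root, "ffmpeg")
    # On Windows the ffmpeg archive keeps executables in a "bin" subfolder.
    if platform.system().lower() == "windows":
        path = os.path.join(path, "bin")
    return os.path.join(path, tool)


parts = ffmpeg_tool_path("/opt/openpype", "ffprobe").split(os.sep)
assert parts[-1] == "ffprobe"
assert "vendor" in parts and "ffmpeg" in parts
```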
diff --git a/openpype/modules/__init__.py b/openpype/modules/__init__.py
index d7c6d99fe6..bae48c540b 100644
--- a/openpype/modules/__init__.py
+++ b/openpype/modules/__init__.py
@@ -18,10 +18,6 @@ from .webserver import (
WebServerModule,
IWebServerRoutes
)
-from .user import (
- UserModule,
- IUserModule
-)
from .idle_manager import (
IdleManager,
IIdleManager
@@ -60,9 +56,6 @@ __all__ = (
"WebServerModule",
"IWebServerRoutes",
- "UserModule",
- "IUserModule",
-
"IdleManager",
"IIdleManager",
diff --git a/openpype/modules/clockify/clockify_api.py b/openpype/modules/clockify/clockify_api.py
index 29de5de0c9..3f0a9799b4 100644
--- a/openpype/modules/clockify/clockify_api.py
+++ b/openpype/modules/clockify/clockify_api.py
@@ -34,7 +34,12 @@ class ClockifyAPI:
self.request_counter = 0
self.request_time = time.time()
- self.secure_registry = OpenPypeSecureRegistry("clockify")
+ self._secure_registry = None
+
+ @property
+ def secure_registry(self):
+ if self._secure_registry is None:
+ self._secure_registry = OpenPypeSecureRegistry("clockify")
+ return self._secure_registry
@property
def headers(self):
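The Clockify change above defers creating the secure registry (which may touch the system keyring) until first use. The lazy-initialization property pattern looks like this in isolation, with a stand-in class for `OpenPypeSecureRegistry`:

```python
class FakeRegistry:
    """Stand-in for OpenPypeSecureRegistry; counts constructions."""
    instances = 0

    def __init__(self, name):
        FakeRegistry.instances += 1
        self.name = name


class Api:
    def __init__(self):
        # Nothing expensive happens at construction time.
        self._secure_registry = None

    @property
    def secure_registry(self):
        # Created on first access, then cached for later accesses.
        if self._secure_registry is None:
            self._secure_registry = FakeRegistry("clockify")
        return self._secure_registry


api = Api()
assert FakeRegistry.instances == 0      # not created yet
assert api.secure_registry.name == "clockify"
assert api.secure_registry is api.secure_registry
assert FakeRegistry.instances == 1      # created exactly once
```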
diff --git a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
index 38a6b9b246..69159fda1a 100644
--- a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py
@@ -64,7 +64,6 @@ class AfterEffectsSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline
"AVALON_ASSET",
"AVALON_TASK",
"AVALON_APP_NAME",
- "OPENPYPE_USERNAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
]
diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
index ba1ffdcf30..37041a84b1 100644
--- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
@@ -273,7 +273,6 @@ class HarmonySubmitDeadline(
"AVALON_ASSET",
"AVALON_TASK",
"AVALON_APP_NAME",
- "OPENPYPE_USERNAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
]
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index 3aea837bb1..a5841f406c 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -47,7 +47,7 @@ payload_skeleton_template = {
"BatchName": None, # Top-level group name
"Name": None, # Job name, as seen in Monitor
"UserName": None,
- "Plugin": "MayaPype",
+ "Plugin": "MayaBatch",
"Frames": "{start}-{end}x{step}",
"Comment": None,
"Priority": 50,
@@ -396,7 +396,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
step=int(self._instance.data["byFrameStep"]))
self.payload_skeleton["JobInfo"]["Plugin"] = self._instance.data.get(
- "mayaRenderPlugin", "MayaPype")
+ "mayaRenderPlugin", "MayaBatch")
self.payload_skeleton["JobInfo"]["BatchName"] = filename
# Job name, as seen in Monitor
@@ -441,7 +441,6 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
"AVALON_ASSET",
"AVALON_TASK",
"AVALON_APP_NAME",
- "OPENPYPE_USERNAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
]
diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
index 2e30e624ef..7faa3393e5 100644
--- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py
@@ -31,6 +31,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
group = ""
department = ""
limit_groups = {}
+ use_gpu = False
def process(self, instance):
instance.data["toBeRenderedOn"] = "deadline"
@@ -206,6 +207,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
# Resolve relative references
"ProjectPath": script_path,
"AWSAssetFile0": render_path,
+
+ # using GPU by default
+ "UseGpu": self.use_gpu,
+
# Only the specific write node is rendered.
"WriteNode": exe_node_name
},
@@ -375,7 +380,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
list: captured groups list
"""
captured_groups = []
- for lg_name, list_node_class in self.deadline_limit_groups.items():
+ for lg_name, list_node_class in self.limit_groups.items():
for node_class in list_node_class:
for node in nuke.allNodes(recurseGroups=True):
# ignore all nodes not member of defined class
diff --git a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
index 8248bf532e..12d687bbf2 100644
--- a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
+++ b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
@@ -2,9 +2,9 @@ import json
from openpype.api import ProjectSettings
-from openpype.modules.ftrack.lib import ServerAction
-from openpype.modules.ftrack.lib.avalon_sync import (
- get_pype_attr,
+from openpype.modules.ftrack.lib import (
+ ServerAction,
+ get_openpype_attr,
CUST_ATTR_AUTO_SYNC
)
@@ -159,7 +159,7 @@ class PrepareProjectServer(ServerAction):
for key, entity in project_anatom_settings["attributes"].items():
attribute_values_by_key[key] = entity.value
- cust_attrs, hier_cust_attrs = get_pype_attr(self.session, True)
+ cust_attrs, hier_cust_attrs = get_openpype_attr(self.session, True)
for attr in hier_cust_attrs:
key = attr["key"]
diff --git a/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
index 347b227dd3..3bb01798e4 100644
--- a/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
+++ b/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
@@ -18,12 +18,15 @@ from avalon import schema
from avalon.api import AvalonMongoDB
from openpype.modules.ftrack.lib import (
+ get_openpype_attr,
+ CUST_ATTR_ID_KEY,
+ CUST_ATTR_AUTO_SYNC,
+
avalon_sync,
+
BaseEvent
)
from openpype.modules.ftrack.lib.avalon_sync import (
- CUST_ATTR_ID_KEY,
- CUST_ATTR_AUTO_SYNC,
EntitySchemas
)
@@ -125,7 +128,7 @@ class SyncToAvalonEvent(BaseEvent):
@property
def avalon_cust_attrs(self):
if self._avalon_cust_attrs is None:
- self._avalon_cust_attrs = avalon_sync.get_pype_attr(
+ self._avalon_cust_attrs = get_openpype_attr(
self.process_session, query_keys=self.cust_attr_query_keys
)
return self._avalon_cust_attrs
diff --git a/openpype/modules/ftrack/event_handlers_user/action_clean_hierarchical_attributes.py b/openpype/modules/ftrack/event_handlers_user/action_clean_hierarchical_attributes.py
index c326c56a7c..45cc9adf55 100644
--- a/openpype/modules/ftrack/event_handlers_user/action_clean_hierarchical_attributes.py
+++ b/openpype/modules/ftrack/event_handlers_user/action_clean_hierarchical_attributes.py
@@ -1,7 +1,10 @@
import collections
import ftrack_api
-from openpype.modules.ftrack.lib import BaseAction, statics_icon
-from openpype.modules.ftrack.lib.avalon_sync import get_pype_attr
+from openpype.modules.ftrack.lib import (
+ BaseAction,
+ statics_icon,
+ get_openpype_attr
+)
class CleanHierarchicalAttrsAction(BaseAction):
@@ -52,7 +55,7 @@ class CleanHierarchicalAttrsAction(BaseAction):
)
entity_ids_joined = ", ".join(all_entities_ids)
- attrs, hier_attrs = get_pype_attr(session)
+ attrs, hier_attrs = get_openpype_attr(session)
for attr in hier_attrs:
configuration_key = attr["key"]
diff --git a/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py b/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
index 63025d35b3..63605eda5e 100644
--- a/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
+++ b/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
@@ -2,10 +2,20 @@ import collections
import json
import arrow
import ftrack_api
-from openpype.modules.ftrack.lib import BaseAction, statics_icon
-from openpype.modules.ftrack.lib.avalon_sync import (
- CUST_ATTR_ID_KEY, CUST_ATTR_GROUP, default_custom_attributes_definition
+from openpype.modules.ftrack.lib import (
+ BaseAction,
+ statics_icon,
+
+ CUST_ATTR_ID_KEY,
+ CUST_ATTR_GROUP,
+ CUST_ATTR_TOOLS,
+ CUST_ATTR_APPLICATIONS,
+
+ default_custom_attributes_definition,
+ app_definitions_from_app_manager,
+ tool_definitions_from_app_manager
)
+
from openpype.api import get_system_settings
from openpype.lib import ApplicationManager
@@ -370,24 +380,12 @@ class CustomAttributes(BaseAction):
exc_info=True
)
- def app_defs_from_app_manager(self):
- app_definitions = []
- for app_name, app in self.app_manager.applications.items():
- if app.enabled and app.is_host:
- app_definitions.append({
- app_name: app.full_label
- })
-
- if not app_definitions:
- app_definitions.append({"empty": "< Empty >"})
- return app_definitions
-
def applications_attribute(self, event):
- apps_data = self.app_defs_from_app_manager()
+ apps_data = app_definitions_from_app_manager(self.app_manager)
applications_custom_attr_data = {
"label": "Applications",
- "key": "applications",
+ "key": CUST_ATTR_APPLICATIONS,
"type": "enumerator",
"entity_type": "show",
"group": CUST_ATTR_GROUP,
@@ -399,19 +397,11 @@ class CustomAttributes(BaseAction):
self.process_attr_data(applications_custom_attr_data, event)
def tools_attribute(self, event):
- tools_data = []
- for tool_name, tool in self.app_manager.tools.items():
- tools_data.append({
- tool_name: tool.label
- })
-
- # Make sure there is at least one item
- if not tools_data:
- tools_data.append({"empty": "< Empty >"})
+ tools_data = tool_definitions_from_app_manager(self.app_manager)
tools_custom_attr_data = {
"label": "Tools",
- "key": "tools_env",
+ "key": CUST_ATTR_TOOLS,
"type": "enumerator",
"is_hierarchical": True,
"group": CUST_ATTR_GROUP,
diff --git a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py
index bd25f995fe..5298c06371 100644
--- a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py
+++ b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py
@@ -4,10 +4,8 @@ from openpype.api import ProjectSettings
from openpype.modules.ftrack.lib import (
BaseAction,
- statics_icon
-)
-from openpype.modules.ftrack.lib.avalon_sync import (
- get_pype_attr,
+ statics_icon,
+ get_openpype_attr,
CUST_ATTR_AUTO_SYNC
)
@@ -162,7 +160,7 @@ class PrepareProjectLocal(BaseAction):
for key, entity in project_anatom_settings["attributes"].items():
attribute_values_by_key[key] = entity.value
- cust_attrs, hier_cust_attrs = get_pype_attr(self.session, True)
+ cust_attrs, hier_cust_attrs = get_openpype_attr(self.session, True)
for attr in hier_cust_attrs:
key = attr["key"]
diff --git a/openpype/modules/ftrack/ftrack_module.py b/openpype/modules/ftrack/ftrack_module.py
index cd383cbdc6..af578de86b 100644
--- a/openpype/modules/ftrack/ftrack_module.py
+++ b/openpype/modules/ftrack/ftrack_module.py
@@ -1,4 +1,5 @@
import os
+import json
import collections
from abc import ABCMeta, abstractmethod
import six
@@ -8,10 +9,10 @@ from openpype.modules import (
ITrayModule,
IPluginPaths,
ITimersManager,
- IUserModule,
ILaunchHookPaths,
ISettingsChangeListener
)
+from openpype.settings import SaveWarningExc
FTRACK_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -32,7 +33,6 @@ class FtrackModule(
ITrayModule,
IPluginPaths,
ITimersManager,
- IUserModule,
ILaunchHookPaths,
ISettingsChangeListener
):
@@ -42,7 +42,17 @@ class FtrackModule(
ftrack_settings = settings[self.name]
self.enabled = ftrack_settings["enabled"]
- self.ftrack_url = ftrack_settings["ftrack_server"].strip("/ ")
+ # Add http schema
+ ftrack_url = ftrack_settings["ftrack_server"].strip("/ ")
+ if ftrack_url:
+ if "http" not in ftrack_url:
+ ftrack_url = "https://" + ftrack_url
+
+ # Check if "ftrackapp.com" is part of the url
+ if "ftrackapp.com" not in ftrack_url:
+ ftrack_url = ftrack_url + ".ftrackapp.com"
+
+ self.ftrack_url = ftrack_url
current_dir = os.path.dirname(os.path.abspath(__file__))
server_event_handlers_paths = [
@@ -113,15 +123,86 @@ class FtrackModule(
if self.tray_module:
self.tray_module.stop_timer_manager()
- def on_pype_user_change(self, username):
- """Implementation of IUserModule interface."""
- if self.tray_module:
- self.tray_module.changed_user()
-
- def on_system_settings_save(self, *_args, **_kwargs):
+ def on_system_settings_save(
+ self, old_value, new_value, changes, new_value_metadata
+ ):
"""Implementation of ISettingsChangeListener interface."""
- # Ignore
- return
+ try:
+ session = self.create_ftrack_session()
+ except Exception:
+ self.log.warning("Couldn't create ftrack session.", exc_info=True)
+ raise SaveWarningExc((
+ "Saving of attributes to Ftrack wasn't successful;"
+ " try running Create/Update Avalon Attributes in Ftrack."
+ ))
+
+ from .lib import (
+ get_openpype_attr,
+ CUST_ATTR_APPLICATIONS,
+ CUST_ATTR_TOOLS,
+ app_definitions_from_app_manager,
+ tool_definitions_from_app_manager
+ )
+ from openpype.api import ApplicationManager
+ query_keys = [
+ "id",
+ "key",
+ "config"
+ ]
+ custom_attributes = get_openpype_attr(
+ session,
+ split_hierarchical=False,
+ query_keys=query_keys
+ )
+ app_attribute = None
+ tool_attribute = None
+ for custom_attribute in custom_attributes:
+ key = custom_attribute["key"]
+ if key == CUST_ATTR_APPLICATIONS:
+ app_attribute = custom_attribute
+ elif key == CUST_ATTR_TOOLS:
+ tool_attribute = custom_attribute
+
+ app_manager = ApplicationManager(new_value_metadata)
+ missing_attributes = []
+ if not app_attribute:
+ missing_attributes.append(CUST_ATTR_APPLICATIONS)
+ else:
+ config = json.loads(app_attribute["config"])
+ new_data = app_definitions_from_app_manager(app_manager)
+ prepared_data = []
+ for item in new_data:
+ for key, label in item.items():
+ prepared_data.append({
+ "menu": label,
+ "value": key
+ })
+
+ config["data"] = json.dumps(prepared_data)
+ app_attribute["config"] = json.dumps(config)
+
+ if not tool_attribute:
+ missing_attributes.append(CUST_ATTR_TOOLS)
+ else:
+ config = json.loads(tool_attribute["config"])
+ new_data = tool_definitions_from_app_manager(app_manager)
+ prepared_data = []
+ for item in new_data:
+ for key, label in item.items():
+ prepared_data.append({
+ "menu": label,
+ "value": key
+ })
+ config["data"] = json.dumps(prepared_data)
+ tool_attribute["config"] = json.dumps(config)
+
+ session.commit()
+
+ if missing_attributes:
+ raise SaveWarningExc((
+ "Couldn't find custom attribute(s) ({}) to update."
+ " Try running Create/Update Avalon Attributes in Ftrack."
+ ).format(", ".join(missing_attributes)))
def on_project_settings_save(self, *_args, **_kwargs):
"""Implementation of ISettingsChangeListener interface."""
@@ -129,7 +210,7 @@ class FtrackModule(
return
def on_project_anatomy_save(
- self, old_value, new_value, changes, project_name
+ self, old_value, new_value, changes, project_name, new_value_metadata
):
"""Implementation of ISettingsChangeListener interface."""
if not project_name:
@@ -140,32 +221,49 @@ class FtrackModule(
return
import ftrack_api
- from openpype.modules.ftrack.lib import avalon_sync
+ from openpype.modules.ftrack.lib import get_openpype_attr
+
+ try:
+ session = self.create_ftrack_session()
+ except Exception:
+ self.log.warning("Couldn't create ftrack session.", exc_info=True)
+ raise SaveWarningExc((
+ "Saving of attributes to Ftrack wasn't successful;"
+ " try running Create/Update Avalon Attributes in Ftrack."
+ ))
- session = self.create_ftrack_session()
project_entity = session.query(
"Project where full_name is \"{}\"".format(project_name)
).first()
if not project_entity:
- self.log.warning((
- "Ftrack project with names \"{}\" was not found."
- " Skipping settings attributes change callback."
- ))
- return
+ msg = (
+ "Ftrack project with name \"{}\" was not found in Ftrack."
+ " Can't push attribute changes."
+ ).format(project_name)
+ self.log.warning(msg)
+ raise SaveWarningExc(msg)
project_id = project_entity["id"]
- cust_attr, hier_attr = avalon_sync.get_pype_attr(session)
+ cust_attr, hier_attr = get_openpype_attr(session)
cust_attr_by_key = {attr["key"]: attr for attr in cust_attr}
hier_attrs_by_key = {attr["key"]: attr for attr in hier_attr}
+
+ failed = {}
+ missing = {}
for key, value in attributes_changes.items():
configuration = hier_attrs_by_key.get(key)
if not configuration:
configuration = cust_attr_by_key.get(key)
if not configuration:
+ self.log.warning(
+ "Custom attribute \"{}\" was not found.".format(key)
+ )
+ missing[key] = value
continue
+ # TODO add permissions check
# TODO add value validations
# - value type and list items
entity_key = collections.OrderedDict()
@@ -179,10 +277,45 @@ class FtrackModule(
"value",
ftrack_api.symbol.NOT_SET,
value
-
)
)
- session.commit()
+ try:
+ session.commit()
+ self.log.debug(
+ "Changed project custom attribute \"{}\" to \"{}\"".format(
+ key, value
+ )
+ )
+ except Exception:
+ self.log.warning(
+ "Failed to set \"{}\" to \"{}\"".format(key, value),
+ exc_info=True
+ )
+ session.rollback()
+ failed[key] = value
+
+ if not failed and not missing:
+ return
+
+ error_msg = (
+ "Values were not updated on Ftrack, which may cause issues."
+ " Try running Create/Update Avalon Attributes in Ftrack"
+ " and resave project settings."
+ )
+ if missing:
+ error_msg += "\nMissing Custom attributes on Ftrack: {}.".format(
+ ", ".join([
+ '"{}"'.format(key)
+ for key in missing.keys()
+ ])
+ )
+ if failed:
+ joined_failed = ", ".join([
+ '"{}": "{}"'.format(key, value)
+ for key, value in failed.items()
+ ])
+ error_msg += "\nFailed to set: {}".format(joined_failed)
+ raise SaveWarningExc(error_msg)
def create_ftrack_session(self, **session_kwargs):
import ftrack_api
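The URL handling added in `FtrackModule` normalizes what users type into settings: bare site names get an `https://` schema and the `ftrackapp.com` cloud domain appended. Extracted as a standalone function:

```python
def normalize_ftrack_url(url):
    """Sketch of the normalization above: ensure an http(s) schema and
    append the ftrackapp.com cloud domain when only a site name is given."""
    url = url.strip("/ ")
    if not url:
        return url
    if "http" not in url:
        url = "https://" + url
    if "ftrackapp.com" not in url:
        url = url + ".ftrackapp.com"
    return url


# A bare site name is expanded to a full cloud URL.
assert normalize_ftrack_url("mystudio") == "https://mystudio.ftrackapp.com"
# An already complete URL passes through unchanged.
assert (
    normalize_ftrack_url("https://mystudio.ftrackapp.com")
    == "https://mystudio.ftrackapp.com"
)
```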
diff --git a/openpype/modules/ftrack/launch_hooks/pre_python2_vendor.py b/openpype/modules/ftrack/launch_hooks/pre_python2_vendor.py
index f14857bc98..d34b6533fb 100644
--- a/openpype/modules/ftrack/launch_hooks/pre_python2_vendor.py
+++ b/openpype/modules/ftrack/launch_hooks/pre_python2_vendor.py
@@ -8,10 +8,13 @@ class PrePython2Support(PreLaunchHook):
Path to vendor modules is added to the beginning of PYTHONPATH.
"""
- # There will be needed more granular filtering in future
- app_groups = ["maya", "nuke", "nukex", "hiero", "nukestudio", "unreal"]
def execute(self):
+ if not self.application.use_python_2:
+ return
+
+ self.log.info("Adding Ftrack Python 2 packages to PYTHONPATH.")
+
# Prepare vendor dir path
python_2_vendor = os.path.join(FTRACK_MODULE_DIR, "python2_vendor")
diff --git a/openpype/modules/ftrack/lib/__init__.py b/openpype/modules/ftrack/lib/__init__.py
index 82b6875590..ce6d5284b6 100644
--- a/openpype/modules/ftrack/lib/__init__.py
+++ b/openpype/modules/ftrack/lib/__init__.py
@@ -1,7 +1,21 @@
+from .constants import (
+ CUST_ATTR_ID_KEY,
+ CUST_ATTR_AUTO_SYNC,
+ CUST_ATTR_GROUP,
+ CUST_ATTR_TOOLS,
+ CUST_ATTR_APPLICATIONS
+)
from . settings import (
get_ftrack_url_from_settings,
get_ftrack_event_mongo_info
)
+from .custom_attributes import (
+ default_custom_attributes_definition,
+ app_definitions_from_app_manager,
+ tool_definitions_from_app_manager,
+ get_openpype_attr
+)
+
from . import avalon_sync
from . import credentials
from .ftrack_base_handler import BaseHandler
@@ -10,9 +24,20 @@ from .ftrack_action_handler import BaseAction, ServerAction, statics_icon
__all__ = (
+ "CUST_ATTR_ID_KEY",
+ "CUST_ATTR_AUTO_SYNC",
+ "CUST_ATTR_GROUP",
+ "CUST_ATTR_TOOLS",
+ "CUST_ATTR_APPLICATIONS",
+
"get_ftrack_url_from_settings",
"get_ftrack_event_mongo_info",
+ "default_custom_attributes_definition",
+ "app_definitions_from_app_manager",
+ "tool_definitions_from_app_manager",
+ "get_openpype_attr",
+
"avalon_sync",
"credentials",
diff --git a/openpype/modules/ftrack/lib/avalon_sync.py b/openpype/modules/ftrack/lib/avalon_sync.py
index 79e1366a0d..f58e858a5a 100644
--- a/openpype/modules/ftrack/lib/avalon_sync.py
+++ b/openpype/modules/ftrack/lib/avalon_sync.py
@@ -14,17 +14,21 @@ else:
from avalon.api import AvalonMongoDB
import avalon
+
from openpype.api import (
Logger,
Anatomy,
get_anatomy_settings
)
+from openpype.lib import ApplicationManager
+
+from .constants import CUST_ATTR_ID_KEY
+from .custom_attributes import get_openpype_attr
from bson.objectid import ObjectId
from bson.errors import InvalidId
from pymongo import UpdateOne
import ftrack_api
-from openpype.lib import ApplicationManager
log = Logger.get_logger(__name__)
@@ -36,23 +40,6 @@ EntitySchemas = {
"config": "openpype:config-2.0"
}
-# Group name of custom attributes
-CUST_ATTR_GROUP = "openpype"
-
-# name of Custom attribute that stores mongo_id from avalon db
-CUST_ATTR_ID_KEY = "avalon_mongo_id"
-CUST_ATTR_AUTO_SYNC = "avalon_auto_sync"
-
-
-def default_custom_attributes_definition():
- json_file_path = os.path.join(
- os.path.dirname(os.path.abspath(__file__)),
- "custom_attributes.json"
- )
- with open(json_file_path, "r") as json_stream:
- data = json.load(json_stream)
- return data
-
def check_regex(name, entity_type, in_schema=None, schema_patterns=None):
schema_name = "asset-3.0"
@@ -91,39 +78,6 @@ def join_query_keys(keys):
return ",".join(["\"{}\"".format(key) for key in keys])
-def get_pype_attr(session, split_hierarchical=True, query_keys=None):
- custom_attributes = []
- hier_custom_attributes = []
- if not query_keys:
- query_keys = [
- "id",
- "entity_type",
- "object_type_id",
- "is_hierarchical",
- "default"
- ]
- # TODO remove deprecated "pype" group from query
- cust_attrs_query = (
- "select {}"
- " from CustomAttributeConfiguration"
- # Kept `pype` for Backwards Compatiblity
- " where group.name in (\"pype\", \"{}\")"
- ).format(", ".join(query_keys), CUST_ATTR_GROUP)
- all_avalon_attr = session.query(cust_attrs_query).all()
- for cust_attr in all_avalon_attr:
- if split_hierarchical and cust_attr["is_hierarchical"]:
- hier_custom_attributes.append(cust_attr)
- continue
-
- custom_attributes.append(cust_attr)
-
- if split_hierarchical:
- # return tuple
- return custom_attributes, hier_custom_attributes
-
- return custom_attributes
-
-
def get_python_type_for_custom_attribute(cust_attr, cust_attr_type_name=None):
"""Python type that value of custom attribute should have.
@@ -921,7 +875,7 @@ class SyncEntitiesFactory:
def set_cutom_attributes(self):
self.log.debug("* Preparing custom attributes")
# Get custom attributes and values
- custom_attrs, hier_attrs = get_pype_attr(
+ custom_attrs, hier_attrs = get_openpype_attr(
self.session, query_keys=self.cust_attr_query_keys
)
ent_types = self.session.query("select id, name from ObjectType").all()
@@ -2508,7 +2462,7 @@ class SyncEntitiesFactory:
if new_entity_id not in p_chilren:
self.entities_dict[parent_id]["children"].append(new_entity_id)
- cust_attr, _ = get_pype_attr(self.session)
+ cust_attr, _ = get_openpype_attr(self.session)
for _attr in cust_attr:
key = _attr["key"]
if key not in av_entity["data"]:
diff --git a/openpype/modules/ftrack/lib/constants.py b/openpype/modules/ftrack/lib/constants.py
new file mode 100644
index 0000000000..73d5112e6d
--- /dev/null
+++ b/openpype/modules/ftrack/lib/constants.py
@@ -0,0 +1,12 @@
+# Group name of custom attributes
+CUST_ATTR_GROUP = "openpype"
+
+# name of Custom attribute that stores mongo_id from avalon db
+CUST_ATTR_ID_KEY = "avalon_mongo_id"
+# Auto sync of project
+CUST_ATTR_AUTO_SYNC = "avalon_auto_sync"
+
+# Applications custom attribute name
+CUST_ATTR_APPLICATIONS = "applications"
+# Environment tools custom attribute
+CUST_ATTR_TOOLS = "tools_env"
diff --git a/openpype/modules/ftrack/lib/credentials.py b/openpype/modules/ftrack/lib/credentials.py
index 2d719347e7..4e29e66382 100644
--- a/openpype/modules/ftrack/lib/credentials.py
+++ b/openpype/modules/ftrack/lib/credentials.py
@@ -15,7 +15,10 @@ API_KEY_KEY = "api_key"
def get_ftrack_hostname(ftrack_server=None):
if not ftrack_server:
- ftrack_server = os.environ["FTRACK_SERVER"]
+ ftrack_server = os.environ.get("FTRACK_SERVER")
+
+ if not ftrack_server:
+ return None
if "//" not in ftrack_server:
ftrack_server = "//" + ftrack_server
@@ -29,17 +32,24 @@ def _get_ftrack_secure_key(hostname, key):
def get_credentials(ftrack_server=None):
+ output = {
+ USERNAME_KEY: None,
+ API_KEY_KEY: None
+ }
hostname = get_ftrack_hostname(ftrack_server)
+ if not hostname:
+ return output
+
username_name = _get_ftrack_secure_key(hostname, USERNAME_KEY)
api_key_name = _get_ftrack_secure_key(hostname, API_KEY_KEY)
username_registry = OpenPypeSecureRegistry(username_name)
api_key_registry = OpenPypeSecureRegistry(api_key_name)
- return {
- USERNAME_KEY: username_registry.get_item(USERNAME_KEY, None),
- API_KEY_KEY: api_key_registry.get_item(API_KEY_KEY, None)
- }
+ output[USERNAME_KEY] = username_registry.get_item(USERNAME_KEY, None)
+ output[API_KEY_KEY] = api_key_registry.get_item(API_KEY_KEY, None)
+
+ return output
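The guarded flow above lets `get_credentials` return a dict of `None` values instead of raising when `FTRACK_SERVER` is unset. The hostname resolution it relies on can be sketched as follows; `get_hostname` is a hypothetical stand-in for `get_ftrack_hostname`, assuming it parses the host with `urllib`:

```python
import os
from urllib.parse import urlparse


def get_hostname(server_url=None):
    """Resolve the Ftrack hostname; return None when no URL is configured."""
    server_url = server_url or os.environ.get("FTRACK_SERVER")
    if not server_url:
        return None

    # urlparse only populates the network location when the URL contains "//"
    if "//" not in server_url:
        server_url = "//" + server_url
    return urlparse(server_url).hostname
```

Returning `None` (rather than raising `KeyError`) lets callers such as `get_credentials` short-circuit gracefully when no server is configured.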
def save_credentials(username, api_key, ftrack_server=None):
@@ -77,9 +87,9 @@ def clear_credentials(ftrack_server=None):
def check_credentials(username, api_key, ftrack_server=None):
if not ftrack_server:
- ftrack_server = os.environ["FTRACK_SERVER"]
+ ftrack_server = os.environ.get("FTRACK_SERVER")
- if not username or not api_key:
+ if not ftrack_server or not username or not api_key:
return False
try:
diff --git a/openpype/modules/ftrack/lib/custom_attributes.py b/openpype/modules/ftrack/lib/custom_attributes.py
new file mode 100644
index 0000000000..33eea32baa
--- /dev/null
+++ b/openpype/modules/ftrack/lib/custom_attributes.py
@@ -0,0 +1,73 @@
+import os
+import json
+
+from .constants import CUST_ATTR_GROUP
+
+
+def default_custom_attributes_definition():
+ json_file_path = os.path.join(
+ os.path.dirname(os.path.abspath(__file__)),
+ "custom_attributes.json"
+ )
+ with open(json_file_path, "r") as json_stream:
+ data = json.load(json_stream)
+ return data
+
+
+def app_definitions_from_app_manager(app_manager):
+ app_definitions = []
+ for app_name, app in app_manager.applications.items():
+ if app.enabled and app.is_host:
+ app_definitions.append({
+ app_name: app.full_label
+ })
+
+ if not app_definitions:
+ app_definitions.append({"empty": "< Empty >"})
+ return app_definitions
+
+
+def tool_definitions_from_app_manager(app_manager):
+ tools_data = []
+ for tool_name, tool in app_manager.tools.items():
+ tools_data.append({
+ tool_name: tool.label
+ })
+
+ # Make sure there is at least one item
+ if not tools_data:
+ tools_data.append({"empty": "< Empty >"})
+ return tools_data
+
+
+def get_openpype_attr(session, split_hierarchical=True, query_keys=None):
+ custom_attributes = []
+ hier_custom_attributes = []
+ if not query_keys:
+ query_keys = [
+ "id",
+ "entity_type",
+ "object_type_id",
+ "is_hierarchical",
+ "default"
+ ]
+ # TODO remove deprecated "pype" group from query
+ cust_attrs_query = (
+ "select {}"
+ " from CustomAttributeConfiguration"
+ # Kept `pype` for backwards compatibility
+ " where group.name in (\"pype\", \"{}\")"
+ ).format(", ".join(query_keys), CUST_ATTR_GROUP)
+ all_avalon_attr = session.query(cust_attrs_query).all()
+ for cust_attr in all_avalon_attr:
+ if split_hierarchical and cust_attr["is_hierarchical"]:
+ hier_custom_attributes.append(cust_attr)
+ continue
+
+ custom_attributes.append(cust_attr)
+
+ if split_hierarchical:
+ # return tuple
+ return custom_attributes, hier_custom_attributes
+
+ return custom_attributes
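The hierarchical/non-hierarchical split performed by `get_openpype_attr` can be exercised without a live Ftrack server. `FakeSession` and `split_openpype_attrs` below are illustration-only stand-ins (the real function queries `CustomAttributeConfiguration` through `ftrack_api.Session`):

```python
class FakeSession:
    """Minimal stand-in for ftrack_api.Session, for this sketch only."""
    def __init__(self, attrs):
        self._attrs = attrs

    def query(self, _query_string):
        attrs = self._attrs

        class _Result:
            def all(self):
                return attrs

        return _Result()


def split_openpype_attrs(session, split_hierarchical=True):
    """Mirror of the splitting loop inside get_openpype_attr."""
    custom_attributes = []
    hier_custom_attributes = []
    for cust_attr in session.query("CustomAttributeConfiguration").all():
        if split_hierarchical and cust_attr["is_hierarchical"]:
            hier_custom_attributes.append(cust_attr)
            continue
        custom_attributes.append(cust_attr)

    if split_hierarchical:
        # two lists are returned as a tuple
        return custom_attributes, hier_custom_attributes
    return custom_attributes
```

Callers that only care about one flat list pass `split_hierarchical=False` and get a single list back.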
diff --git a/openpype/modules/ftrack/tray/ftrack_tray.py b/openpype/modules/ftrack/tray/ftrack_tray.py
index ee27d8b730..34e4646767 100644
--- a/openpype/modules/ftrack/tray/ftrack_tray.py
+++ b/openpype/modules/ftrack/tray/ftrack_tray.py
@@ -289,12 +289,6 @@ class FtrackTrayWrapper:
parent_menu.addMenu(tray_menu)
- def tray_start(self):
- self.validate()
-
- def tray_exit(self):
- self.stop_action_server()
-
# Definition of visibility of each menu actions
def set_menu_visibility(self):
self.tray_server_menu.menuAction().setVisible(self.bool_logged)
diff --git a/openpype/modules/ftrack/tray/login_dialog.py b/openpype/modules/ftrack/tray/login_dialog.py
index ce91c6d012..a6360a7380 100644
--- a/openpype/modules/ftrack/tray/login_dialog.py
+++ b/openpype/modules/ftrack/tray/login_dialog.py
@@ -134,11 +134,11 @@ class CredentialsDialog(QtWidgets.QDialog):
def fill_ftrack_url(self):
url = os.getenv("FTRACK_SERVER")
- if url == self.ftsite_input.text():
+ checked_url = self.check_url(url)
+ if checked_url == self.ftsite_input.text():
return
- checked_url = self.check_url(url)
- self.ftsite_input.setText(checked_url or "")
+ self.ftsite_input.setText(checked_url or "< Not set >")
enabled = bool(checked_url)
@@ -147,7 +147,15 @@ class CredentialsDialog(QtWidgets.QDialog):
self.api_input.setEnabled(enabled)
self.user_input.setEnabled(enabled)
- self.ftsite_input.setEnabled(enabled)
+
+ if not url:
+ self.btn_advanced.hide()
+ self.btn_simple.hide()
+ self.btn_ftrack_login.hide()
+ self.btn_login.hide()
+ self.note_label.hide()
+ self.api_input.hide()
+ self.user_input.hide()
def set_advanced_mode(self, is_advanced):
self._in_advance_mode = is_advanced
@@ -293,10 +301,9 @@ class CredentialsDialog(QtWidgets.QDialog):
url = url.strip("/ ")
if not url:
- self.set_error((
- "You need to specify a valid server URL, "
- "for example https://server-name.ftrackapp.com"
- ))
+ self.set_error(
+ "Ftrack URL is not defined in settings!"
+ )
return
if "http" not in url:
diff --git a/openpype/modules/idle_manager/idle_module.py b/openpype/modules/idle_manager/idle_module.py
index c06dbed78c..5dd5160aa7 100644
--- a/openpype/modules/idle_manager/idle_module.py
+++ b/openpype/modules/idle_manager/idle_module.py
@@ -1,3 +1,4 @@
+import platform
import collections
from abc import ABCMeta, abstractmethod
@@ -40,7 +41,12 @@ class IdleManager(PypeModule, ITrayService):
name = "idle_manager"
def initialize(self, module_settings):
- self.enabled = True
+ enabled = True
+ # Ignore on MacOs
+ # - pynput need root permissions and enabled access for application
+ if platform.system().lower() == "darwin":
+ enabled = False
+ self.enabled = enabled
self.time_callbacks = collections.defaultdict(list)
self.idle_thread = None
diff --git a/openpype/modules/launcher_action.py b/openpype/modules/launcher_action.py
index da0468d495..5ed8585b6a 100644
--- a/openpype/modules/launcher_action.py
+++ b/openpype/modules/launcher_action.py
@@ -22,7 +22,6 @@ class LauncherAction(PypeModule, ITrayAction):
# Register actions
if self.tray_initialized:
from openpype.tools.launcher import actions
- # actions.register_default_actions()
actions.register_config_actions()
actions_paths = self.manager.collect_plugin_paths()["actions"]
actions.register_actions_from_paths(actions_paths)
diff --git a/openpype/modules/settings_action.py b/openpype/modules/settings_action.py
index 371e190c12..3f7cb8c3ba 100644
--- a/openpype/modules/settings_action.py
+++ b/openpype/modules/settings_action.py
@@ -16,18 +16,20 @@ class ISettingsChangeListener:
}
"""
@abstractmethod
- def on_system_settings_save(self, old_value, new_value, changes):
+ def on_system_settings_save(
+ self, old_value, new_value, changes, new_value_metadata
+ ):
pass
@abstractmethod
def on_project_settings_save(
- self, old_value, new_value, changes, project_name
+ self, old_value, new_value, changes, project_name, new_value_metadata
):
pass
@abstractmethod
def on_project_anatomy_save(
- self, old_value, new_value, changes, project_name
+ self, old_value, new_value, changes, project_name, new_value_metadata
):
pass
diff --git a/openpype/modules/sync_server/providers/gdrive.py b/openpype/modules/sync_server/providers/gdrive.py
index f1ea24f601..b67e5a6cfa 100644
--- a/openpype/modules/sync_server/providers/gdrive.py
+++ b/openpype/modules/sync_server/providers/gdrive.py
@@ -7,7 +7,7 @@ from .abstract_provider import AbstractProvider
from googleapiclient.http import MediaFileUpload, MediaIoBaseDownload
from openpype.api import Logger
from openpype.api import get_system_settings
-from ..utils import time_function
+from ..utils import time_function, ResumableError
import time
@@ -63,7 +63,14 @@ class GDriveHandler(AbstractProvider):
return
self.service = self._get_gd_service()
- self.root = self._prepare_root_info()
+ try:
+ self.root = self._prepare_root_info()
+ except errors.HttpError:
+ log.warning("HttpError in sync loop, "
+ "trying next loop",
+ exc_info=True)
+ raise ResumableError
+
self._tree = tree
self.active = True
diff --git a/openpype/modules/sync_server/providers/lib.py b/openpype/modules/sync_server/providers/lib.py
index 58947e115d..01a5d50ba5 100644
--- a/openpype/modules/sync_server/providers/lib.py
+++ b/openpype/modules/sync_server/providers/lib.py
@@ -92,4 +92,4 @@ factory = ProviderFactory()
# 7 denotes number of files that could be synced in single loop - learned by
# trial and error
factory.register_provider('gdrive', GDriveHandler, 7)
-factory.register_provider('local_drive', LocalDriveHandler, 10)
+factory.register_provider('local_drive', LocalDriveHandler, 50)
diff --git a/openpype/modules/sync_server/sync_server.py b/openpype/modules/sync_server/sync_server.py
index e97c0e8844..9b305a1b2e 100644
--- a/openpype/modules/sync_server/sync_server.py
+++ b/openpype/modules/sync_server/sync_server.py
@@ -8,7 +8,7 @@ from concurrent.futures._base import CancelledError
from .providers import lib
from openpype.lib import PypeLogger
-from .utils import SyncStatus
+from .utils import SyncStatus, ResumableError
log = PypeLogger().get_logger("SyncServer")
@@ -232,6 +232,7 @@ class SyncServerThread(threading.Thread):
self.loop = None
self.is_running = False
self.executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)
+ self.timer = None
def run(self):
self.is_running = True
@@ -266,8 +267,8 @@ class SyncServerThread(threading.Thread):
Returns:
"""
- try:
- while self.is_running and not self.module.is_paused():
+ while self.is_running and not self.module.is_paused():
+ try:
import time
start_time = None
self.module.set_sync_project_settings() # clean cache
@@ -384,17 +385,27 @@ class SyncServerThread(threading.Thread):
duration = time.time() - start_time
log.debug("One loop took {:.2f}s".format(duration))
- await asyncio.sleep(self.module.get_loop_delay(collection))
- except ConnectionResetError:
- log.warning("ConnectionResetError in sync loop, trying next loop",
- exc_info=True)
- except CancelledError:
- # just stopping server
- pass
- except Exception:
- self.stop()
- log.warning("Unhandled exception in sync loop, stopping server",
- exc_info=True)
+
+ delay = self.module.get_loop_delay(collection)
+ log.debug("Waiting {} seconds before next loop".format(delay))
+ self.timer = asyncio.create_task(self.run_timer(delay))
+ await asyncio.gather(self.timer)
+
+ except ConnectionResetError:
+ log.warning("ConnectionResetError in sync loop, "
+ "trying next loop",
+ exc_info=True)
+ except CancelledError:
+ # just stopping server
+ pass
+ except ResumableError:
+ log.warning("ResumableError in sync loop, "
+ "trying next loop",
+ exc_info=True)
+ except Exception:
+ self.stop()
+ log.warning("Unhandled exception in sync loop, stopping server",
+ exc_info=True)
def stop(self):
"""Sets is_running flag to false, 'check_shutdown' shuts server down"""
@@ -417,6 +428,17 @@ class SyncServerThread(threading.Thread):
await asyncio.sleep(0.07)
self.loop.stop()
+ async def run_timer(self, delay):
+ """Wait for 'delay' seconds to start next loop"""
+ await asyncio.sleep(delay)
+
+ def reset_timer(self):
+ """Called when waiting for next loop should be skipped"""
+ log.debug("Resetting timer")
+ if self.timer:
+ self.timer.cancel()
+ self.timer = None
+
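The `run_timer`/`reset_timer` pair above replaces a plain `asyncio.sleep` with a cancellable task, so a user action can skip the remaining wait and start the next sync loop immediately. A minimal sketch of that pattern (`wait_or_skip` is a hypothetical name; `skip_after` simulates the external `reset_timer` call):

```python
import asyncio


async def wait_or_skip(delay, skip_after=None):
    """Sleep 'delay' seconds as a cancellable task; optionally cancel early."""
    timer = asyncio.ensure_future(asyncio.sleep(delay))
    if skip_after is not None:
        # simulate an external reset_timer() call after 'skip_after' seconds
        asyncio.get_running_loop().call_later(skip_after, timer.cancel)
    try:
        await timer
        return "completed"
    except asyncio.CancelledError:
        # cancellation of the timer only skips the wait, not the loop itself
        return "skipped"
```

Because the sleep runs as a separate task, cancelling it raises `CancelledError` only in the `await`, which the loop catches and treats as "start the next iteration now".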
def _working_sites(self, collection):
if self.module.is_project_paused(collection):
log.debug("Both sites same, skipping")
diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py
index 59c3787789..a434af9fea 100644
--- a/openpype/modules/sync_server/sync_server_module.py
+++ b/openpype/modules/sync_server/sync_server_module.py
@@ -401,6 +401,24 @@ class SyncServerModule(PypeModule, ITrayModule):
return remote_site
+ def reset_timer(self):
+ """
+ Called when waiting for next loop should be skipped.
+
+ In case of user involvement (e.g. site reset), start the next loop right away.
+ """
+ self.sync_server_thread.reset_timer()
+
+ def get_enabled_projects(self):
+ """Returns list of projects which have SyncServer enabled."""
+ enabled_projects = []
+ for project in self.connection.projects():
+ project_name = project["name"]
+ project_settings = self.get_sync_project_setting(project_name)
+ if project_settings:
+ enabled_projects.append(project_name)
+
+ return enabled_projects
""" End of Public API """
def get_local_file_path(self, collection, site_name, file_path):
@@ -413,7 +431,7 @@ class SyncServerModule(PypeModule, ITrayModule):
return local_file_path
def _get_remote_sites_from_settings(self, sync_settings):
- if not self.enabled or not sync_settings['enabled']:
+ if not self.enabled or not sync_settings.get('enabled'):
return []
remote_sites = [self.DEFAULT_SITE, self.LOCAL_SITE]
@@ -424,7 +442,7 @@ class SyncServerModule(PypeModule, ITrayModule):
def _get_enabled_sites_from_settings(self, sync_settings):
sites = [self.DEFAULT_SITE]
- if self.enabled and sync_settings['enabled']:
+ if self.enabled and sync_settings.get('enabled'):
sites.append(self.LOCAL_SITE)
return sites
@@ -445,6 +463,11 @@ class SyncServerModule(PypeModule, ITrayModule):
if not self.enabled:
return
+ enabled_projects = self.get_enabled_projects()
+ if not enabled_projects:
+ self.enabled = False
+ return
+
self.lock = threading.Lock()
try:
diff --git a/openpype/modules/sync_server/tray/app.py b/openpype/modules/sync_server/tray/app.py
index 25fbf0e49a..2538675c51 100644
--- a/openpype/modules/sync_server/tray/app.py
+++ b/openpype/modules/sync_server/tray/app.py
@@ -7,7 +7,7 @@ from openpype import resources
from openpype.modules.sync_server.tray.widgets import (
SyncProjectListWidget,
- SyncRepresentationWidget
+ SyncRepresentationSummaryWidget
)
log = PypeLogger().get_logger("SyncServer")
@@ -47,7 +47,7 @@ class SyncServerWindow(QtWidgets.QDialog):
left_column_layout.addWidget(self.pause_btn)
left_column.setLayout(left_column_layout)
- repres = SyncRepresentationWidget(
+ repres = SyncRepresentationSummaryWidget(
sync_server,
project=self.projects.current_project,
parent=self)
@@ -78,7 +78,7 @@ class SyncServerWindow(QtWidgets.QDialog):
layout.addWidget(footer)
self.setLayout(body_layout)
- self.setWindowTitle("Sync Server")
+ self.setWindowTitle("Sync Queue")
self.projects.project_changed.connect(
lambda: repres.table_view.model().set_project(
diff --git a/openpype/modules/sync_server/tray/lib.py b/openpype/modules/sync_server/tray/lib.py
index 0282d79ea1..04bd1f568e 100644
--- a/openpype/modules/sync_server/tray/lib.py
+++ b/openpype/modules/sync_server/tray/lib.py
@@ -1,4 +1,7 @@
from Qt import QtCore
+import attr
+import abc
+import six
from openpype.lib import PypeLogger
@@ -20,6 +23,111 @@ ProviderRole = QtCore.Qt.UserRole + 2
ProgressRole = QtCore.Qt.UserRole + 4
DateRole = QtCore.Qt.UserRole + 6
FailedRole = QtCore.Qt.UserRole + 8
+HeaderNameRole = QtCore.Qt.UserRole + 10
+FullItemRole = QtCore.Qt.UserRole + 12
+
+
+@six.add_metaclass(abc.ABCMeta)
+class AbstractColumnFilter:
+
+ def __init__(self, column_name, dbcon=None):
+ self.column_name = column_name
+ self.dbcon = dbcon
+ self._search_variants = []
+
+ def search_variants(self):
+ """
+ Returns all flavors of search available for this column.
+ """
+ return self._search_variants
+
+ @abc.abstractmethod
+ def values(self):
+ """
+ Returns dict of available values for filter {'label':'value'}
+ """
+ pass
+
+ @abc.abstractmethod
+ def prepare_match_part(self, values):
+ """
+ Prepares format valid for $match part from 'values'.
+
+ Args:
+ values (dict): {'label': 'value'}
+ Returns:
+ (dict): {'COLUMN_NAME': {'$in': ['val1', 'val2']}}
+ """
+ pass
+
+
+class PredefinedSetFilter(AbstractColumnFilter):
+
+ def __init__(self, column_name, values):
+ super().__init__(column_name)
+ self._search_variants = ['checkbox']
+ self._values = values
+ if self._values and \
+ list(self._values.keys())[0] == list(self._values.values())[0]:
+ self._search_variants.append('text')
+
+ def values(self):
+ return {k: v for k, v in self._values.items()}
+
+ def prepare_match_part(self, values):
+ return {'$in': list(values.keys())}
+
+
+class RegexTextFilter(AbstractColumnFilter):
+
+ def __init__(self, column_name):
+ super().__init__(column_name)
+ self._search_variants = ['text']
+
+ def values(self):
+ return {}
+
+ def prepare_match_part(self, values):
+ """ values = {'text1 text2': 'text1 text2'} """
+ if not values:
+ return {}
+
+ regex_strs = set()
+ text = list(values.keys())[0] # only single key always expected
+ for word in text.split():
+ regex_strs.add('.*{}.*'.format(word))
+
+ return {"$regex": "|".join(regex_strs),
+ "$options": 'i'}
+
+
+class MultiSelectFilter(AbstractColumnFilter):
+
+ def __init__(self, column_name, values=None, dbcon=None):
+ super().__init__(column_name)
+ self._values = values
+ self.dbcon = dbcon
+ self._search_variants = ['checkbox']
+
+ def values(self):
+ if self._values:
+ return {k: v for k, v in self._values.items()}
+
+ recs = self.dbcon.find({'type': self.column_name}, {"name": 1,
+ "_id": -1})
+ values = {}
+ for item in recs:
+ values[item["name"]] = item["name"]
+ return dict(sorted(values.items(), key=lambda it: it[1]))
+
+ def prepare_match_part(self, values):
+ return {'$in': list(values.keys())}
+
+
+@attr.s
+class FilterDefinition:
+ type = attr.ib()
+ values = attr.ib(factory=list)
def pretty_size(value, suffix='B'):
@@ -50,3 +158,9 @@ def translate_provider_for_icon(sync_server, project, site):
if site == sync_server.DEFAULT_SITE:
return sync_server.DEFAULT_SITE
return sync_server.get_provider_for_site(project, site)
+
+
+def get_item_by_id(model, object_id):
+ index = model.get_index(object_id)
+ item = model.data(index, FullItemRole)
+ return item
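The `RegexTextFilter.prepare_match_part` added above turns whitespace-separated search words into a case-insensitive Mongo `$regex` alternation. A standalone sketch of that construction, verified against Python's `re` (the original collects patterns in an unordered set; sorting here is only for determinism):

```python
import re


def build_word_regex(text):
    """Build the Mongo $regex match dict: any whitespace-separated word
    may match, case-insensitively ('$options': 'i')."""
    parts = sorted(".*{}.*".format(word) for word in set(text.split()))
    return {"$regex": "|".join(parts), "$options": "i"}
```

Since each word is wrapped in `.*…​.*` and joined with `|`, a document matches when any single word appears anywhere in the field value.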
diff --git a/openpype/modules/sync_server/tray/models.py b/openpype/modules/sync_server/tray/models.py
index 3cc53c6ec4..8fdd9487a4 100644
--- a/openpype/modules/sync_server/tray/models.py
+++ b/openpype/modules/sync_server/tray/models.py
@@ -56,17 +56,31 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
"""Returns project"""
return self._project
+ @property
+ def column_filtering(self):
+ return self._column_filtering
+
def rowCount(self, _index):
return len(self._data)
- def columnCount(self, _index):
+ def columnCount(self, _index=None):
return len(self._header)
- def headerData(self, section, orientation, role):
+ def headerData(self, section, orientation, role=Qt.DisplayRole):
+ if section >= len(self.COLUMN_LABELS):
+ return
+
if role == Qt.DisplayRole:
if orientation == Qt.Horizontal:
return self.COLUMN_LABELS[section][1]
+ if role == lib.HeaderNameRole:
+ if orientation == Qt.Horizontal:
+ return self.COLUMN_LABELS[section][0] # return name
+
+ def get_column(self, index):
+ return self.COLUMN_LABELS[index]
+
def get_header_index(self, value):
"""
Returns index of 'value' in headers
@@ -103,10 +117,10 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
self._rec_loaded = 0
if not representations:
- self.query = self.get_default_query(load_records)
+ self.query = self.get_query(load_records)
representations = self.dbcon.aggregate(self.query)
- self.add_page_records(self.local_site, self.remote_site,
+ self.add_page_records(self.active_site, self.remote_site,
representations)
self.endResetModel()
self.refresh_finished.emit()
@@ -138,13 +152,13 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
log.debug("fetchMore")
items_to_fetch = min(self._total_records - self._rec_loaded,
self.PAGE_SIZE)
- self.query = self.get_default_query(self._rec_loaded)
+ self.query = self.get_query(self._rec_loaded)
representations = self.dbcon.aggregate(self.query)
self.beginInsertRows(index,
self._rec_loaded,
self._rec_loaded + items_to_fetch - 1)
- self.add_page_records(self.local_site, self.remote_site,
+ self.add_page_records(self.active_site, self.remote_site,
representations)
self.endInsertRows()
@@ -156,6 +170,8 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
Sort is happening on a DB side, model is reset, db queried
again.
+ It remembers the last sort and adds it as a secondary sort after the new one.
+
Args:
index (int): column index
order (int): 0|
@@ -170,8 +186,18 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
else:
order = -1
- self.sort = {self.SORT_BY_COLUMN[index]: order, '_id': 1}
- self.query = self.get_default_query()
+ backup_sort = dict(self.sort)
+
+ self.sort = {self.SORT_BY_COLUMN[index]: order} # reset
+ # add last one
+ for key, val in backup_sort.items():
+ if key != '_id':
+ self.sort[key] = val
+ break
+ # add default one
+ self.sort['_id'] = 1
+
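The sort handling above keeps the previous primary column as a secondary sort and always appends `_id` as a tie-breaker. The same merge can be sketched as a pure function; `merge_sort` is a hypothetical helper, and the `new_key` guard (not present in the original loop) avoids restoring the old direction when the same column is re-sorted:

```python
def merge_sort(previous_sort, new_key, order):
    """New column becomes the primary sort key, the previous primary is
    kept as a secondary sort, and '_id' stays the final tie-breaker."""
    merged = {new_key: order}
    for key, value in previous_sort.items():
        # keep at most one previous key, skipping '_id' and the new column
        if key not in ("_id", new_key):
            merged[key] = value
            break
    merged["_id"] = 1
    return merged
```

Dict insertion order (Python 3.7+) preserves the priority of the sort keys, which is what the Mongo aggregation's `$sort` stage relies on.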
+ self.query = self.get_query()
# import json
# log.debug(json.dumps(self.query, indent=4).\
# replace('False', 'false').\
@@ -180,16 +206,86 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
representations = self.dbcon.aggregate(self.query)
self.refresh(representations)
- def set_filter(self, word_filter):
+ def set_word_filter(self, word_filter):
"""
Adds text value filtering
Args:
word_filter (str): string inputted by user
"""
- self.word_filter = word_filter
+ self._word_filter = word_filter
self.refresh()
+ def get_filters(self):
+ """
+ Returns all available filter editors per column_name keys.
+ """
+ filters = {}
+ for column_name, _ in self.COLUMN_LABELS:
+ filter_rec = self.COLUMN_FILTERS.get(column_name)
+ if filter_rec:
+ filter_rec.dbcon = self.dbcon
+ filters[column_name] = filter_rec
+
+ return filters
+
+ def get_column_filter(self, index):
+ """
+ Returns filter object for column 'index'.
+
+ Args:
+ index(int): index of column in header
+
+ Returns:
+ (AbstractColumnFilter)
+ """
+ column_name = self._header[index]
+
+ filter_rec = self.COLUMN_FILTERS.get(column_name)
+ if filter_rec:
+ filter_rec.dbcon = self.dbcon # up-to-date db connection
+
+ return filter_rec
+
+ def set_column_filtering(self, checked_values):
+ """
+ Sets dictionary used in '$match' part of MongoDB aggregate
+
+ Args:
+ checked_values(dict): key:values ({'status': {1: "Foo", 3: "Bar"}})
+
+ Modifies:
+ self._column_filtering : {'status': {'$in': [1, 2, 3]}}
+ """
+ filtering = {}
+ for column_name, dict_value in checked_values.items():
+ column_f = self.COLUMN_FILTERS.get(column_name)
+ if not column_f:
+ continue
+ column_f.dbcon = self.dbcon
+ filtering[column_name] = column_f.prepare_match_part(dict_value)
+
+ self._column_filtering = filtering
+
+ def get_column_filter_values(self, index):
+ """
+ Returns list of available values for filtering in the column
+
+ Args:
+ index(int): index of column in header
+
+ Returns:
+ (dict) of value: label shown in filtering menu
+ 'value' is used in MongoDB query, 'label' is human readable for
+ menu
+ for some columns (e.g. 'subset') 'value' and 'label' may be the same
+ """
+ filter_rec = self.get_column_filter(index)
+ if not filter_rec:
+ return {}
+
+ return filter_rec.values()
+
def set_project(self, project):
"""
Changes project, called after project selection is changed
@@ -199,7 +295,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
"""
self._project = project
self.sync_server.set_sync_project_settings()
- self.local_site = self.sync_server.get_active_site(self.project)
+ self.active_site = self.sync_server.get_active_site(self.project)
self.remote_site = self.sync_server.get_remote_site(self.project)
self.refresh()
@@ -251,7 +347,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
("files_count", "Files"),
("files_size", "Size"),
("priority", "Priority"),
- ("state", "Status")
+ ("status", "Status")
]
DEFAULT_SORT = {
@@ -259,18 +355,25 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
"_id": 1
}
SORT_BY_COLUMN = [
- "context.asset", # asset
- "context.subset", # subset
- "context.version", # version
- "context.representation", # representation
+ "asset", # asset
+ "subset", # subset
+ "version", # version
+ "representation", # representation
"updated_dt_local", # local created_dt
"updated_dt_remote", # remote created_dt
"files_count", # count of files
"files_size", # file size of all files
"context.asset", # priority TODO
- "status" # state
+ "status" # status
]
+ COLUMN_FILTERS = {
+ 'status': lib.PredefinedSetFilter('status', lib.STATUS),
+ 'subset': lib.RegexTextFilter('subset'),
+ 'asset': lib.RegexTextFilter('asset'),
+ 'representation': lib.MultiSelectFilter('representation')
+ }
+
refresh_started = QtCore.Signal()
refresh_finished = QtCore.Signal()
@@ -297,7 +400,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
files_count = attr.ib(default=None)
files_size = attr.ib(default=None)
priority = attr.ib(default=None)
- state = attr.ib(default=None)
+ status = attr.ib(default=None)
path = attr.ib(default=None)
def __init__(self, sync_server, header, project=None):
@@ -307,7 +410,10 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
self._project = project
self._rec_loaded = 0
self._total_records = 0 # how many documents query actually found
- self.word_filter = None
+ self._word_filter = None
+ self._column_filtering = {}
self._initialized = False
if not self._project or self._project == lib.DUMMY_PROJECT:
@@ -316,15 +422,13 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
self.sync_server = sync_server
# TODO think about admin mode
# this is for regular user, always only single local and single remote
- self.local_site = self.sync_server.get_active_site(self.project)
+ self.active_site = self.sync_server.get_active_site(self.project)
self.remote_site = self.sync_server.get_remote_site(self.project)
- self.projection = self.get_default_projection()
-
self.sort = self.DEFAULT_SORT
- self.query = self.get_default_query()
- self.default_query = list(self.get_default_query())
+ self.query = self.get_query()
+ self.default_query = list(self.get_query())
representations = self.dbcon.aggregate(self.query)
self.refresh(representations)
@@ -336,6 +440,9 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
def data(self, index, role):
item = self._data[index.row()]
+ if role == lib.FullItemRole:
+ return item
+
header_value = self._header[index.column()]
if role == lib.ProviderRole:
if header_value == 'local_site':
@@ -359,9 +466,11 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
if role == lib.FailedRole:
if header_value == 'local_site':
- return item.state == lib.STATUS[2] and item.local_progress < 1
+ return item.status == lib.STATUS[2] and \
+ item.local_progress < 1
if header_value == 'remote_site':
- return item.state == lib.STATUS[2] and item.remote_progress < 1
+ return item.status == lib.STATUS[2] and \
+ item.remote_progress < 1
if role == Qt.DisplayRole:
# because of ImageDelegate
@@ -397,7 +506,6 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
remote_site)
for repre in result.get("paginatedResults"):
- context = repre.get("context").pop()
files = repre.get("files", [])
if isinstance(files, dict): # aggregate returns dictionary
files = [files]
@@ -420,17 +528,17 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
avg_progress_local = lib.convert_progress(
repre.get('avg_progress_local', '0'))
- if context.get("version"):
- version = "v{:0>3d}".format(context.get("version"))
+ if repre.get("version"):
+ version = "v{:0>3d}".format(repre.get("version"))
else:
version = "master"
item = self.SyncRepresentation(
repre.get("_id"),
- context.get("asset"),
- context.get("subset"),
+ repre.get("asset"),
+ repre.get("subset"),
version,
- context.get("representation"),
+ repre.get("representation"),
local_updated,
remote_updated,
local_site,
@@ -449,7 +557,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
self._data.append(item)
self._rec_loaded += 1
- def get_default_query(self, limit=0):
+ def get_query(self, limit=0):
"""
Returns basic aggregate query for main table.
@@ -461,7 +569,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
'sync_dt' - same for remote side
'local_site' - progress of repr on local side, 1 = finished
'remote_site' - progress on remote side, calculates from files
- 'state' -
+ 'status' -
0 - in progress
1 - failed
2 - queued
@@ -481,7 +589,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
if limit == 0:
limit = SyncRepresentationSummaryModel.PAGE_SIZE
- return [
+ aggr = [
{"$match": self.get_match_part()},
{'$unwind': '$files'},
# merge potentially unwinded records back to single per repre
@@ -492,7 +600,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
}},
'order_local': {
'$filter': {'input': '$files.sites', 'as': 'p',
- 'cond': {'$eq': ['$$p.name', self.local_site]}
+ 'cond': {'$eq': ['$$p.name', self.active_site]}
}}
}},
{'$addFields': {
@@ -584,16 +692,26 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
'paused_local': {'$sum': '$paused_local'},
'updated_dt_local': {'$max': "$updated_dt_local"}
}},
- {"$project": self.projection},
- {"$sort": self.sort},
- {
+ {"$project": self.projection}
+ ]
+
+ if self.column_filtering:
+ aggr.append(
+ {"$match": self.column_filtering}
+ )
+
+ aggr.extend(
+ [{"$sort": self.sort},
+ {
'$facet': {
'paginatedResults': [{'$skip': self._rec_loaded},
{'$limit': limit}],
'totalCount': [{'$count': 'count'}]
}
- }
- ]
+ }]
+ )
+
+ return aggr
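The refactored `get_query` above composes the pipeline in three parts: the base stages, an optional `$match` that is injected only when column filtering is active, and the trailing `$sort` plus `$facet` pagination. A runnable sketch of that composition (stage contents stubbed, names hypothetical) shows the ordering:

```python
# Sketch of the pipeline assembly in get_query(); the real base stages
# ($unwind, $group, projection, ...) are stubbed out here.
def build_pipeline(column_filtering, sort, rec_loaded, limit):
    aggr = [
        {"$match": {"type": "representation"}},  # stand-in for get_match_part()
        {"$unwind": "$files"},
        {"$project": {"status": 1}},             # stand-in for self.projection
    ]
    # Column filters only add a stage when the user actually set one
    if column_filtering:
        aggr.append({"$match": column_filtering})
    aggr.extend([
        {"$sort": sort},
        # $facet returns one page of rows plus the total count in one query
        {"$facet": {
            "paginatedResults": [{"$skip": rec_loaded},
                                 {"$limit": limit}],
            "totalCount": [{"$count": "count"}],
        }},
    ])
    return aggr
```

Placing the filter `$match` after `$project` lets it reference the flattened field names (`status`, `subset`, ...) rather than the nested `context.*` paths.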
def get_match_part(self):
"""
@@ -611,25 +729,26 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
"""
base_match = {
"type": "representation",
- 'files.sites.name': {'$all': [self.local_site,
+ 'files.sites.name': {'$all': [self.active_site,
self.remote_site]}
}
- if not self.word_filter:
+ if not self._word_filter:
return base_match
else:
- regex_str = '.*{}.*'.format(self.word_filter)
+ regex_str = '.*{}.*'.format(self._word_filter)
base_match['$or'] = [
{'context.subset': {'$regex': regex_str, '$options': 'i'}},
{'context.asset': {'$regex': regex_str, '$options': 'i'}},
{'context.representation': {'$regex': regex_str,
'$options': 'i'}}]
- if ObjectId.is_valid(self.word_filter):
- base_match['$or'] = [{'_id': ObjectId(self.word_filter)}]
+ if ObjectId.is_valid(self._word_filter):
+ base_match['$or'] = [{'_id': ObjectId(self._word_filter)}]
return base_match
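`get_match_part` above builds an `$or` regex over subset/asset/representation from the quick-filter text, but switches to an exact `_id` lookup when the text is a valid ObjectId. A self-contained sketch of that branching (a 24-char hex check stands in for `bson.ObjectId.is_valid`, which is not importable here):

```python
import re

# Stand-in for bson.ObjectId.is_valid: 24 lowercase hex characters
OBJECT_ID_RE = re.compile(r"^[0-9a-f]{24}$")


def build_match(word_filter, active_site, remote_site):
    match = {
        "type": "representation",
        "files.sites.name": {"$all": [active_site, remote_site]},
    }
    if not word_filter:
        return match
    regex_str = ".*{}.*".format(word_filter)
    match["$or"] = [
        {"context.subset": {"$regex": regex_str, "$options": "i"}},
        {"context.asset": {"$regex": regex_str, "$options": "i"}},
        {"context.representation": {"$regex": regex_str, "$options": "i"}},
    ]
    # An ObjectId-looking filter replaces the regex OR entirely
    if OBJECT_ID_RE.match(word_filter):
        match["$or"] = [{"_id": word_filter}]  # real code wraps in ObjectId()
    return match
```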
- def get_default_projection(self):
+ @property
+ def projection(self):
"""
Projection part for aggregate query.
@@ -639,10 +758,10 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
(dict)
"""
return {
- "context.subset": 1,
- "context.asset": 1,
- "context.version": 1,
- "context.representation": 1,
+ "subset": {"$first": "$context.subset"},
+ "asset": {"$first": "$context.asset"},
+ "version": {"$first": "$context.version"},
+ "representation": {"$first": "$context.representation"},
"data.path": 1,
"files": 1,
'files_count': 1,
@@ -721,7 +840,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
("remote_site", "Remote site"),
("files_size", "Size"),
("priority", "Priority"),
- ("state", "Status")
+ ("status", "Status")
]
PAGE_SIZE = 30
@@ -733,10 +852,15 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
"updated_dt_local", # local created_dt
"updated_dt_remote", # remote created_dt
"size", # remote progress
- "context.asset", # priority TODO
- "status" # state
+ "size", # priority TODO
+ "status" # status
]
+ COLUMN_FILTERS = {
+ 'status': lib.PredefinedSetFilter('status', lib.STATUS),
+ 'file': lib.RegexTextFilter('file'),
+ }
+
refresh_started = QtCore.Signal()
refresh_finished = QtCore.Signal()
@@ -759,7 +883,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
remote_progress = attr.ib(default=None)
size = attr.ib(default=None)
priority = attr.ib(default=None)
- state = attr.ib(default=None)
+ status = attr.ib(default=None)
tries = attr.ib(default=None)
error = attr.ib(default=None)
path = attr.ib(default=None)
@@ -772,22 +896,20 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
self._project = project
self._rec_loaded = 0
self._total_records = 0 # how many documents query actually found
- self.word_filter = None
+ self._word_filter = None
self._id = _id
self._initialized = False
+ self._column_filtering = {}
self.sync_server = sync_server
# TODO think about admin mode
# this is for regular user, always only single local and single remote
- self.local_site = self.sync_server.get_active_site(self.project)
+ self.active_site = self.sync_server.get_active_site(self.project)
self.remote_site = self.sync_server.get_remote_site(self.project)
self.sort = self.DEFAULT_SORT
- # in case we would like to hide/show some columns
- self.projection = self.get_default_projection()
-
- self.query = self.get_default_query()
+ self.query = self.get_query()
representations = self.dbcon.aggregate(self.query)
self.refresh(representations)
@@ -798,6 +920,9 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
def data(self, index, role):
item = self._data[index.row()]
+ if role == lib.FullItemRole:
+ return item
+
header_value = self._header[index.column()]
if role == lib.ProviderRole:
if header_value == 'local_site':
@@ -821,9 +946,11 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
if role == lib.FailedRole:
if header_value == 'local_site':
- return item.state == lib.STATUS[2] and item.local_progress < 1
+ return item.status == lib.STATUS[2] and \
+ item.local_progress < 1
if header_value == 'remote_site':
- return item.state == lib.STATUS[2] and item.remote_progress < 1
+ return item.status == lib.STATUS[2] and \
+ item.remote_progress < 1
if role == Qt.DisplayRole:
# because of ImageDelegate
@@ -909,7 +1036,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
self._data.append(item)
self._rec_loaded += 1
- def get_default_query(self, limit=0):
+ def get_query(self, limit=0):
"""
Gets query that gets used when no extra sorting, filtering or
projecting is needed.
@@ -923,7 +1050,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
if limit == 0:
limit = SyncRepresentationSummaryModel.PAGE_SIZE
- return [
+ aggr = [
{"$match": self.get_match_part()},
{"$unwind": "$files"},
{'$addFields': {
@@ -933,7 +1060,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
}},
'order_local': {
'$filter': {'input': '$files.sites', 'as': 'p',
- 'cond': {'$eq': ['$$p.name', self.local_site]}
+ 'cond': {'$eq': ['$$p.name', self.active_site]}
}}
}},
{'$addFields': {
@@ -1019,7 +1146,16 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
]}
]}}
}},
- {"$project": self.projection},
+ {"$project": self.projection}
+ ]
+
+ if self.column_filtering:
+ aggr.append(
+ {"$match": self.column_filtering}
+ )
+
+ aggr.extend([
{"$sort": self.sort},
{
'$facet': {
@@ -1028,7 +1164,9 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
'totalCount': [{'$count': 'count'}]
}
}
- ]
+ ])
+
+ return aggr
def get_match_part(self):
"""
@@ -1038,20 +1176,21 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
Returns:
(dict)
"""
- if not self.word_filter:
+ if not self._word_filter:
return {
"type": "representation",
"_id": self._id
}
else:
- regex_str = '.*{}.*'.format(self.word_filter)
+ regex_str = '.*{}.*'.format(self._word_filter)
return {
"type": "representation",
"_id": self._id,
'$or': [{'files.path': {'$regex': regex_str, '$options': 'i'}}]
}
- def get_default_projection(self):
+ @property
+ def projection(self):
"""
Projection part for aggregate query.
diff --git a/openpype/modules/sync_server/tray/widgets.py b/openpype/modules/sync_server/tray/widgets.py
index 5071ffa2b0..106fc4b8a8 100644
--- a/openpype/modules/sync_server/tray/widgets.py
+++ b/openpype/modules/sync_server/tray/widgets.py
@@ -1,6 +1,7 @@
import os
import subprocess
import sys
+from functools import partial
from Qt import QtWidgets, QtCore, QtGui
from Qt.QtCore import Qt
@@ -14,6 +15,7 @@ from openpype.api import get_local_site_id
from openpype.lib import PypeLogger
from avalon.tools.delegates import pretty_timestamp
+from avalon.vendor import qtawesome
from openpype.modules.sync_server.tray.models import (
SyncRepresentationSummaryModel,
@@ -40,6 +42,8 @@ class SyncProjectListWidget(ProjectListWidget):
self.local_site = None
self.icons = {}
+ self.layout().setContentsMargins(0, 0, 0, 0)
+
def validate_context_change(self):
return True
@@ -91,7 +95,6 @@ class SyncProjectListWidget(ProjectListWidget):
self.project_name = point_index.data(QtCore.Qt.DisplayRole)
menu = QtWidgets.QMenu()
- menu.setStyleSheet(style.load_stylesheet())
actions_mapping = {}
if self.sync_server.is_project_paused(self.project_name):
@@ -132,7 +135,7 @@ class SyncProjectListWidget(ProjectListWidget):
self.refresh()
-class SyncRepresentationWidget(QtWidgets.QWidget):
+class _SyncRepresentationWidget(QtWidgets.QWidget):
"""
Summary dialog with list of representations that matches current
settings 'local_site' and 'remote_site'.
@@ -140,87 +143,12 @@ class SyncRepresentationWidget(QtWidgets.QWidget):
active_changed = QtCore.Signal() # active index changed
message_generated = QtCore.Signal(str)
- default_widths = (
- ("asset", 220),
- ("subset", 190),
- ("version", 55),
- ("representation", 95),
- ("local_site", 170),
- ("remote_site", 170),
- ("files_count", 50),
- ("files_size", 60),
- ("priority", 50),
- ("state", 110)
- )
+ def _selection_changed(self, _new_selected, _all_selected):
+ idxs = self.selection_model.selectedRows()
+ self._selected_ids = []
- def __init__(self, sync_server, project=None, parent=None):
- super(SyncRepresentationWidget, self).__init__(parent)
-
- self.sync_server = sync_server
-
- self._selected_id = None # keep last selected _id
- self.representation_id = None
- self.site_name = None # to pause/unpause representation
-
- self.filter = QtWidgets.QLineEdit()
- self.filter.setPlaceholderText("Filter representations..")
-
- self._scrollbar_pos = None
-
- top_bar_layout = QtWidgets.QHBoxLayout()
- top_bar_layout.addWidget(self.filter)
-
- self.table_view = QtWidgets.QTableView()
- headers = [item[0] for item in self.default_widths]
-
- model = SyncRepresentationSummaryModel(sync_server, headers, project)
- self.table_view.setModel(model)
- self.table_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
- self.table_view.setSelectionMode(
- QtWidgets.QAbstractItemView.SingleSelection)
- self.table_view.setSelectionBehavior(
- QtWidgets.QAbstractItemView.SelectRows)
- self.table_view.horizontalHeader().setSortIndicator(
- -1, Qt.AscendingOrder)
- self.table_view.setSortingEnabled(True)
- self.table_view.horizontalHeader().setSortIndicatorShown(True)
- self.table_view.setAlternatingRowColors(True)
- self.table_view.verticalHeader().hide()
-
- column = self.table_view.model().get_header_index("local_site")
- delegate = ImageDelegate(self)
- self.table_view.setItemDelegateForColumn(column, delegate)
-
- column = self.table_view.model().get_header_index("remote_site")
- delegate = ImageDelegate(self)
- self.table_view.setItemDelegateForColumn(column, delegate)
-
- for column_name, width in self.default_widths:
- idx = model.get_header_index(column_name)
- self.table_view.setColumnWidth(idx, width)
-
- layout = QtWidgets.QVBoxLayout(self)
- layout.setContentsMargins(0, 0, 0, 0)
- layout.addLayout(top_bar_layout)
- layout.addWidget(self.table_view)
-
- self.table_view.doubleClicked.connect(self._double_clicked)
- self.filter.textChanged.connect(lambda: model.set_filter(
- self.filter.text()))
- self.table_view.customContextMenuRequested.connect(
- self._on_context_menu)
-
- model.refresh_started.connect(self._save_scrollbar)
- model.refresh_finished.connect(self._set_scrollbar)
- self.table_view.model().modelReset.connect(self._set_selection)
-
- self.selection_model = self.table_view.selectionModel()
- self.selection_model.selectionChanged.connect(self._selection_changed)
-
- def _selection_changed(self, _new_selection):
- index = self.selection_model.currentIndex()
- self._selected_id = \
- self.table_view.model().data(index, Qt.UserRole)
+ for index in idxs:
+ self._selected_ids.append(self.model.data(index, Qt.UserRole))
def _set_selection(self):
"""
@@ -228,151 +156,169 @@ class SyncRepresentationWidget(QtWidgets.QWidget):
Keep selection during model refresh.
"""
- if self._selected_id:
- index = self.table_view.model().get_index(self._selected_id)
+ existing_ids = []
+ for selected_id in self._selected_ids:
+ index = self.model.get_index(selected_id)
if index and index.isValid():
mode = QtCore.QItemSelectionModel.Select | \
QtCore.QItemSelectionModel.Rows
- self.selection_model.setCurrentIndex(index, mode)
- else:
- self._selected_id = None
+ self.selection_model.select(index, mode)
+ existing_ids.append(selected_id)
+
+ self._selected_ids = existing_ids
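The `_set_selection` logic above re-selects previously selected rows after a model reset and silently drops ids that no longer resolve to a valid index. The core of it can be sketched independently of Qt (the callback-based API here is a simplification, not the widget's real interface):

```python
# Sketch of the selection-restore loop: re-select rows whose ids still
# exist after a refresh, and forget the rest.
def restore_selection(selected_ids, id_to_row, select_row):
    existing_ids = []
    for selected_id in selected_ids:
        row = id_to_row.get(selected_id)  # stand-in for model.get_index()
        if row is not None:
            select_row(row)               # stand-in for selection_model.select()
            existing_ids.append(selected_id)
    return existing_ids
```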
def _double_clicked(self, index):
"""
Opens representation dialog with all files after doubleclick
"""
- _id = self.table_view.model().data(index, Qt.UserRole)
+ _id = self.model.data(index, Qt.UserRole)
detail_window = SyncServerDetailWindow(
- self.sync_server, _id, self.table_view.model().project)
+ self.sync_server, _id, self.model.project)
detail_window.exec()
-
+
def _on_context_menu(self, point):
"""
Shows menu with loader actions on Right-click.
+
+ Supports multi-select: all available actions are added and each
+ action checks whether it applies to a given item, skipping it if not.
"""
+ is_multi = len(self._selected_ids) > 1
point_index = self.table_view.indexAt(point)
- if not point_index.isValid():
+ if not point_index.isValid() and not is_multi:
return
- self.item = self.table_view.model()._data[point_index.row()]
- self.representation_id = self.item._id
- log.debug("menu representation _id:: {}".
- format(self.representation_id))
+ if is_multi:
+ index = self.model.get_index(self._selected_ids[0])
+ item = self.model.data(index, lib.FullItemRole)
+ else:
+ item = self.model.data(point_index, lib.FullItemRole)
+ action_kwarg_map, actions_mapping, menu = self._prepare_menu(item,
+ is_multi)
+
+ result = menu.exec_(QtGui.QCursor.pos())
+ if result:
+ to_run = actions_mapping[result]
+ to_run_kwargs = action_kwarg_map.get(result, {})
+ if to_run:
+ to_run(**to_run_kwargs)
+
+ self.model.refresh()
+
+ def _prepare_menu(self, item, is_multi):
menu = QtWidgets.QMenu()
- menu.setStyleSheet(style.load_stylesheet())
+
actions_mapping = {}
- actions_kwargs_mapping = {}
+ action_kwarg_map = {}
- local_site = self.item.local_site
- local_progress = self.item.local_progress
- remote_site = self.item.remote_site
- remote_progress = self.item.remote_progress
+ active_site = self.model.active_site
+ remote_site = self.model.remote_site
- for site, progress in {local_site: local_progress,
+ local_progress = item.local_progress
+ remote_progress = item.remote_progress
+
+ project = self.model.project
+
+ for site, progress in {active_site: local_progress,
remote_site: remote_progress}.items():
- project = self.table_view.model().project
- provider = self.sync_server.get_provider_for_site(project,
- site)
+ provider = self.sync_server.get_provider_for_site(project, site)
if provider == 'local_drive':
if 'studio' in site:
txt = " studio version"
else:
txt = " local version"
action = QtWidgets.QAction("Open in explorer" + txt)
- if progress == 1.0:
+ if progress == 1.0 or is_multi:
actions_mapping[action] = self._open_in_explorer
- actions_kwargs_mapping[action] = {'site': site}
+ action_kwarg_map[action] = \
+ self._get_action_kwargs(site)
menu.addAction(action)
- # progress smaller then 1.0 --> in progress or queued
- if local_progress < 1.0:
- self.site_name = local_site
- else:
- self.site_name = remote_site
-
- if self.item.state in [lib.STATUS[0], lib.STATUS[1]]:
- action = QtWidgets.QAction("Pause")
- actions_mapping[action] = self._pause
- menu.addAction(action)
-
- if self.item.state == lib.STATUS[3]:
- action = QtWidgets.QAction("Unpause")
- actions_mapping[action] = self._unpause
- menu.addAction(action)
-
- # if self.item.state == lib.STATUS[1]:
- # action = QtWidgets.QAction("Open error detail")
- # actions_mapping[action] = self._show_detail
- # menu.addAction(action)
-
- if remote_progress == 1.0:
+ if remote_progress == 1.0 or is_multi:
action = QtWidgets.QAction("Re-sync Active site")
- actions_mapping[action] = self._reset_local_site
+ action_kwarg_map[action] = self._get_action_kwargs(active_site)
+ actions_mapping[action] = self._reset_site
menu.addAction(action)
- if local_progress == 1.0:
+ if local_progress == 1.0 or is_multi:
action = QtWidgets.QAction("Re-sync Remote site")
- actions_mapping[action] = self._reset_remote_site
+ action_kwarg_map[action] = self._get_action_kwargs(remote_site)
+ actions_mapping[action] = self._reset_site
menu.addAction(action)
- if local_site != self.sync_server.DEFAULT_SITE:
+ if active_site == get_local_site_id():
action = QtWidgets.QAction("Completely remove from local")
+ action_kwarg_map[action] = self._get_action_kwargs(active_site)
actions_mapping[action] = self._remove_site
menu.addAction(action)
- else:
- action = QtWidgets.QAction("Mark for sync to local")
- actions_mapping[action] = self._add_site
- menu.addAction(action)
+
+ # # temp for testing only !!!
+ # action = QtWidgets.QAction("Download")
+ # action_kwarg_map[action] = self._get_action_kwargs(active_site)
+ # actions_mapping[action] = self._add_site
+ # menu.addAction(action)
if not actions_mapping:
action = QtWidgets.QAction("< No action >")
actions_mapping[action] = None
menu.addAction(action)
- result = menu.exec_(QtGui.QCursor.pos())
- if result:
- to_run = actions_mapping[result]
- to_run_kwargs = actions_kwargs_mapping.get(result, {})
- if to_run:
- to_run(**to_run_kwargs)
+ return action_kwarg_map, actions_mapping, menu
- self.table_view.model().refresh()
+ def _pause(self, selected_ids=None):
+ log.debug("Pause {}".format(selected_ids))
+ for representation_id in selected_ids:
+ item = lib.get_item_by_id(self.model, representation_id)
+ if item.status not in [lib.STATUS[0], lib.STATUS[1]]:
+ continue
+ for site_name in [self.model.active_site, self.model.remote_site]:
+ check_progress = self._get_progress(item, site_name)
+ if check_progress < 1:
+ self.sync_server.pause_representation(self.model.project,
+ representation_id,
+ site_name)
- def _pause(self):
- self.sync_server.pause_representation(self.table_view.model().project,
- self.representation_id,
- self.site_name)
- self.site_name = None
- self.message_generated.emit("Paused {}".format(self.representation_id))
+ self.message_generated.emit("Paused {}".format(representation_id))
- def _unpause(self):
- self.sync_server.unpause_representation(
- self.table_view.model().project,
- self.representation_id,
- self.site_name)
- self.site_name = None
- self.message_generated.emit("Unpaused {}".format(
- self.representation_id))
+ def _unpause(self, selected_ids=None):
+ log.debug("UnPause {}".format(selected_ids))
+ for representation_id in selected_ids:
+ item = lib.get_item_by_id(self.model, representation_id)
+ if item.status != lib.STATUS[3]:
+ continue
+ for site_name in [self.model.active_site, self.model.remote_site]:
+ check_progress = self._get_progress(item, site_name)
+ if check_progress < 1:
+ self.sync_server.unpause_representation(
+ self.model.project,
+ representation_id,
+ site_name)
+
+ self.message_generated.emit("Unpaused {}".format(representation_id))
# temporary here for testing, will be removed TODO
- def _add_site(self):
- log.info(self.representation_id)
- project_name = self.table_view.model().project
- local_site_name = get_local_site_id()
- try:
- self.sync_server.add_site(
- project_name,
- self.representation_id,
- local_site_name
- )
- self.message_generated.emit(
- "Site {} added for {}".format(local_site_name,
- self.representation_id))
- except ValueError as exp:
- self.message_generated.emit("Error {}".format(str(exp)))
+ def _add_site(self, selected_ids=None, site_name=None):
+ log.debug("Add site {}:{}".format(selected_ids, site_name))
+ for representation_id in selected_ids:
+ item = lib.get_item_by_id(self.model, representation_id)
+ if item.local_site == site_name or item.remote_site == site_name:
+ # site already exists skip
+ continue
- def _remove_site(self):
+ try:
+ self.sync_server.add_site(
+ self.model.project,
+ representation_id,
+ site_name)
+ self.message_generated.emit(
+ "Site {} added for {}".format(site_name,
+ representation_id))
+ except ValueError as exp:
+ self.message_generated.emit("Error {}".format(str(exp)))
+ self.sync_server.reset_timer()
+
+ def _remove_site(self, selected_ids=None, site_name=None):
"""
Removes site record AND files.
@@ -382,65 +328,90 @@ class SyncRepresentationWidget(QtWidgets.QWidget):
This could only happen when artist work on local machine, not
connected to studio mounted drives.
"""
- log.info("Removing {}".format(self.representation_id))
- try:
- local_site = get_local_site_id()
- self.sync_server.remove_site(
- self.table_view.model().project,
- self.representation_id,
- local_site,
- True)
- self.message_generated.emit("Site {} removed".format(local_site))
- except ValueError as exp:
- self.message_generated.emit("Error {}".format(str(exp)))
- self.table_view.model().refresh(
- load_records=self.table_view.model()._rec_loaded)
+ log.debug("Remove site {}:{}".format(selected_ids, site_name))
+ for representation_id in selected_ids:
+ log.info("Removing {}".format(representation_id))
+ try:
+ self.sync_server.remove_site(
+ self.model.project,
+ representation_id,
+ site_name,
+ True)
+ self.message_generated.emit(
+ "Site {} removed".format(site_name))
+ except ValueError as exp:
+ self.message_generated.emit("Error {}".format(str(exp)))
- def _reset_local_site(self):
+ self.model.refresh(
+ load_records=self.model._rec_loaded)
+ self.sync_server.reset_timer()
+
+ def _reset_site(self, selected_ids=None, site_name=None):
"""
Removes errors or success metadata for particular file >> forces
redo of upload/download
"""
- self.sync_server.reset_provider_for_file(
- self.table_view.model().project,
- self.representation_id,
- 'local')
- self.table_view.model().refresh(
- load_records=self.table_view.model()._rec_loaded)
+ log.debug("Reset site {}:{}".format(selected_ids, site_name))
+ for representation_id in selected_ids:
+ item = lib.get_item_by_id(self.model, representation_id)
+ check_progress = self._get_progress(item, site_name, True)
- def _reset_remote_site(self):
- """
- Removes errors or success metadata for particular file >> forces
- redo of upload/download
- """
- self.sync_server.reset_provider_for_file(
- self.table_view.model().project,
- self.representation_id,
- 'remote')
- self.table_view.model().refresh(
- load_records=self.table_view.model()._rec_loaded)
+ # do not reset if opposite side is not fully there
+ if check_progress != 1:
+ log.debug("Not fully available {} on other side, skipping".
+ format(check_progress))
+ continue
- def _open_in_explorer(self, site):
- if not self.item:
- return
+ self.sync_server.reset_provider_for_file(
+ self.model.project,
+ representation_id,
+ site_name=site_name,
+ force=True)
- fpath = self.item.path
- project = self.table_view.model().project
- fpath = self.sync_server.get_local_file_path(project,
- site,
- fpath)
+ self.model.refresh(
+ load_records=self.model._rec_loaded)
+ self.sync_server.reset_timer()
- fpath = os.path.normpath(os.path.dirname(fpath))
- if os.path.isdir(fpath):
- if 'win' in sys.platform: # windows
- subprocess.Popen('explorer "%s"' % fpath)
- elif sys.platform == 'darwin': # macOS
- subprocess.Popen(['open', fpath])
- else: # linux
- try:
- subprocess.Popen(['xdg-open', fpath])
- except OSError:
- raise OSError('unsupported xdg-open call??')
+ def _open_in_explorer(self, selected_ids=None, site_name=None):
+ log.debug("Open in Explorer {}:{}".format(selected_ids, site_name))
+ for selected_id in selected_ids:
+ item = lib.get_item_by_id(self.model, selected_id)
+ if not item:
+ continue
+
+ fpath = item.path
+ project = self.model.project
+ fpath = self.sync_server.get_local_file_path(project,
+ site_name,
+ fpath)
+
+ fpath = os.path.normpath(os.path.dirname(fpath))
+ if os.path.isdir(fpath):
+ if 'win' in sys.platform: # windows
+ subprocess.Popen('explorer "%s"' % fpath)
+ elif sys.platform == 'darwin': # macOS
+ subprocess.Popen(['open', fpath])
+ else: # linux
+ try:
+ subprocess.Popen(['xdg-open', fpath])
+ except OSError:
+ raise OSError("Failed to open '{}' with xdg-open".format(fpath))
+
+ def _get_progress(self, item, site_name, opposite=False):
+ """Returns progress value according to site (side)"""
+ progress = {'local': item.local_progress,
+ 'remote': item.remote_progress}
+ side = 'remote'
+ if site_name == self.model.active_site:
+ side = 'local'
+ if opposite:
+ side = 'remote' if side == 'local' else 'local'
+
+ return progress[side]
+
+ def _get_action_kwargs(self, site_name):
+ """Default format of kwargs for action"""
+ return {"selected_ids": self._selected_ids, "site_name": site_name}
def _save_scrollbar(self):
self._scrollbar_pos = self.table_view.verticalScrollBar().value()
@@ -450,7 +421,155 @@ class SyncRepresentationWidget(QtWidgets.QWidget):
self.table_view.verticalScrollBar().setValue(self._scrollbar_pos)
-class SyncRepresentationDetailWidget(QtWidgets.QWidget):
+class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
+
+ default_widths = (
+ ("asset", 190),
+ ("subset", 170),
+ ("version", 60),
+ ("representation", 145),
+ ("local_site", 160),
+ ("remote_site", 160),
+ ("files_count", 50),
+ ("files_size", 60),
+ ("priority", 70),
+ ("status", 110)
+ )
+
+ def __init__(self, sync_server, project=None, parent=None):
+ super(SyncRepresentationSummaryWidget, self).__init__(parent)
+
+ self.sync_server = sync_server
+
+ self._selected_ids = [] # keep last selected _id
+
+ txt_filter = QtWidgets.QLineEdit()
+ txt_filter.setPlaceholderText("Quick filter representations..")
+ txt_filter.setClearButtonEnabled(True)
+ txt_filter.addAction(
+ qtawesome.icon("fa.filter", color="gray"),
+ QtWidgets.QLineEdit.LeadingPosition)
+ self.txt_filter = txt_filter
+
+ self._scrollbar_pos = None
+
+ top_bar_layout = QtWidgets.QHBoxLayout()
+ top_bar_layout.addWidget(self.txt_filter)
+
+ table_view = QtWidgets.QTableView()
+ headers = [item[0] for item in self.default_widths]
+
+ model = SyncRepresentationSummaryModel(sync_server, headers, project)
+ table_view.setModel(model)
+ table_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+ table_view.setSelectionMode(
+ QtWidgets.QAbstractItemView.ExtendedSelection)
+ table_view.setSelectionBehavior(
+ QtWidgets.QAbstractItemView.SelectRows)
+ table_view.horizontalHeader().setSortIndicator(
+ -1, Qt.AscendingOrder)
+ table_view.setAlternatingRowColors(True)
+ table_view.verticalHeader().hide()
+
+ column = table_view.model().get_header_index("local_site")
+ delegate = ImageDelegate(self)
+ table_view.setItemDelegateForColumn(column, delegate)
+
+ column = table_view.model().get_header_index("remote_site")
+ delegate = ImageDelegate(self)
+ table_view.setItemDelegateForColumn(column, delegate)
+
+ layout = QtWidgets.QVBoxLayout(self)
+ layout.setContentsMargins(0, 0, 0, 0)
+ layout.addLayout(top_bar_layout)
+ layout.addWidget(table_view)
+
+ self.table_view = table_view
+ self.model = model
+
+ horizontal_header = HorizontalHeader(self)
+
+ table_view.setHorizontalHeader(horizontal_header)
+ table_view.setSortingEnabled(True)
+
+ for column_name, width in self.default_widths:
+ idx = model.get_header_index(column_name)
+ table_view.setColumnWidth(idx, width)
+
+ table_view.doubleClicked.connect(self._double_clicked)
+ self.txt_filter.textChanged.connect(lambda: model.set_word_filter(
+ self.txt_filter.text()))
+ table_view.customContextMenuRequested.connect(self._on_context_menu)
+
+ model.refresh_started.connect(self._save_scrollbar)
+ model.refresh_finished.connect(self._set_scrollbar)
+ model.modelReset.connect(self._set_selection)
+
+ self.selection_model = self.table_view.selectionModel()
+ self.selection_model.selectionChanged.connect(self._selection_changed)
+
+ def _prepare_menu(self, item, is_multi):
+ action_kwarg_map, actions_mapping, menu = \
+ super()._prepare_menu(item, is_multi)
+
+ if item.status in [lib.STATUS[0], lib.STATUS[1]] or is_multi:
+ action = QtWidgets.QAction("Pause in queue")
+ actions_mapping[action] = self._pause
+ # pause handles which site_name it will pause itself
+ action_kwarg_map[action] = {"selected_ids": self._selected_ids}
+ menu.addAction(action)
+
+ if item.status == lib.STATUS[3] or is_multi:
+ action = QtWidgets.QAction("Unpause in queue")
+ actions_mapping[action] = self._unpause
+ action_kwarg_map[action] = {"selected_ids": self._selected_ids}
+ menu.addAction(action)
+
+ return action_kwarg_map, actions_mapping, menu
+
+
+class SyncServerDetailWindow(QtWidgets.QDialog):
+ """Wrapper window for SyncRepresentationDetailWidget
+
+ Creates standalone window with list of files for selected repre_id.
+ """
+ def __init__(self, sync_server, _id, project, parent=None):
+ log.debug(
+ "SyncServerDetailWindow _id: {}".format(_id))
+ super(SyncServerDetailWindow, self).__init__(parent)
+ self.setWindowFlags(QtCore.Qt.Window)
+ self.setFocusPolicy(QtCore.Qt.StrongFocus)
+
+ self.setStyleSheet(style.load_stylesheet())
+ self.setWindowIcon(QtGui.QIcon(style.app_icon_path()))
+ self.resize(1000, 400)
+
+ body = QtWidgets.QWidget()
+ footer = QtWidgets.QWidget()
+ footer.setFixedHeight(20)
+
+ container = SyncRepresentationDetailWidget(sync_server, _id, project,
+ parent=self)
+ body_layout = QtWidgets.QHBoxLayout(body)
+ body_layout.addWidget(container)
+ body_layout.setContentsMargins(0, 0, 0, 0)
+
+ self.message = QtWidgets.QLabel()
+ self.message.hide()
+
+ footer_layout = QtWidgets.QVBoxLayout(footer)
+ footer_layout.addWidget(self.message)
+ footer_layout.setContentsMargins(0, 0, 0, 0)
+
+ layout = QtWidgets.QVBoxLayout(self)
+ layout.addWidget(body)
+ layout.addWidget(footer)
+
+ self.setLayout(layout)
+ self.setWindowTitle("Sync Representation Detail")
+
+
+class SyncRepresentationDetailWidget(_SyncRepresentationWidget):
"""
Widget to display list of synchronizable files for single repre.
@@ -466,243 +585,197 @@ class SyncRepresentationDetailWidget(QtWidgets.QWidget):
("local_site", 185),
("remote_site", 185),
("size", 60),
- ("priority", 25),
- ("state", 110)
+ ("priority", 60),
+ ("status", 110)
)
def __init__(self, sync_server, _id=None, project=None, parent=None):
super(SyncRepresentationDetailWidget, self).__init__(parent)
log.debug("Representation_id:{}".format(_id))
- self.representation_id = _id
- self.item = None # set to item that mouse was clicked over
self.project = project
self.sync_server = sync_server
- self._selected_id = None
+ self.representation_id = _id
+ self._selected_ids = []
- self.filter = QtWidgets.QLineEdit()
- self.filter.setPlaceholderText("Filter representation..")
+ self.txt_filter = QtWidgets.QLineEdit()
+ self.txt_filter.setPlaceholderText("Quick filter representation..")
+ self.txt_filter.setClearButtonEnabled(True)
+ self.txt_filter.addAction(qtawesome.icon("fa.filter", color="gray"),
+ QtWidgets.QLineEdit.LeadingPosition)
self._scrollbar_pos = None
top_bar_layout = QtWidgets.QHBoxLayout()
- top_bar_layout.addWidget(self.filter)
+ top_bar_layout.addWidget(self.txt_filter)
- self.table_view = QtWidgets.QTableView()
+ table_view = QtWidgets.QTableView()
headers = [item[0] for item in self.default_widths]
model = SyncRepresentationDetailModel(sync_server, headers, _id,
project)
- self.table_view.setModel(model)
- self.table_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
- self.table_view.setSelectionMode(
- QtWidgets.QAbstractItemView.SingleSelection)
- self.table_view.setSelectionBehavior(
+ table_view.setModel(model)
+ table_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+ table_view.setSelectionMode(
+ QtWidgets.QAbstractItemView.ExtendedSelection)
+ table_view.setSelectionBehavior(
QtWidgets.QTableView.SelectRows)
- self.table_view.horizontalHeader().setSortIndicator(-1,
- Qt.AscendingOrder)
- self.table_view.setSortingEnabled(True)
- self.table_view.horizontalHeader().setSortIndicatorShown(True)
- self.table_view.setAlternatingRowColors(True)
- self.table_view.verticalHeader().hide()
+ table_view.horizontalHeader().setSortIndicator(-1, Qt.AscendingOrder)
+ table_view.horizontalHeader().setSortIndicatorShown(True)
+ table_view.setAlternatingRowColors(True)
+ table_view.verticalHeader().hide()
- column = self.table_view.model().get_header_index("local_site")
+ column = model.get_header_index("local_site")
delegate = ImageDelegate(self)
- self.table_view.setItemDelegateForColumn(column, delegate)
+ table_view.setItemDelegateForColumn(column, delegate)
- column = self.table_view.model().get_header_index("remote_site")
+ column = model.get_header_index("remote_site")
delegate = ImageDelegate(self)
- self.table_view.setItemDelegateForColumn(column, delegate)
-
- for column_name, width in self.default_widths:
- idx = model.get_header_index(column_name)
- self.table_view.setColumnWidth(idx, width)
+ table_view.setItemDelegateForColumn(column, delegate)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addLayout(top_bar_layout)
- layout.addWidget(self.table_view)
+ layout.addWidget(table_view)
- self.filter.textChanged.connect(lambda: model.set_filter(
- self.filter.text()))
- self.table_view.customContextMenuRequested.connect(
- self._on_context_menu)
+ self.model = model
+
+ self.selection_model = table_view.selectionModel()
+ self.selection_model.selectionChanged.connect(self._selection_changed)
+
+ horizontal_header = HorizontalHeader(self)
+
+ table_view.setHorizontalHeader(horizontal_header)
+ table_view.setSortingEnabled(True)
+
+ for column_name, width in self.default_widths:
+ idx = model.get_header_index(column_name)
+ table_view.setColumnWidth(idx, width)
+
+ self.table_view = table_view
+
+ self.txt_filter.textChanged.connect(lambda: model.set_word_filter(
+ self.txt_filter.text()))
+ table_view.customContextMenuRequested.connect(self._on_context_menu)
model.refresh_started.connect(self._save_scrollbar)
model.refresh_finished.connect(self._set_scrollbar)
- self.table_view.model().modelReset.connect(self._set_selection)
+ model.modelReset.connect(self._set_selection)
- self.selection_model = self.table_view.selectionModel()
- self.selection_model.selectionChanged.connect(self._selection_changed)
-
- def _selection_changed(self):
- index = self.selection_model.currentIndex()
- self._selected_id = self.table_view.model().data(index, Qt.UserRole)
-
- def _set_selection(self):
- """
- Sets selection to 'self._selected_id' if exists.
-
- Keep selection during model refresh.
- """
- if self._selected_id:
- index = self.table_view.model().get_index(self._selected_id)
- if index and index.isValid():
- mode = QtCore.QItemSelectionModel.Select | \
- QtCore.QItemSelectionModel.Rows
- self.selection_model.setCurrentIndex(index, mode)
- else:
- self._selected_id = None
-
- def _show_detail(self):
+ def _show_detail(self, selected_ids=None):
"""
 Shows a window with error messages for failed syncs of the selected files.
"""
- dt = max(self.item.created_dt, self.item.sync_dt)
- detail_window = SyncRepresentationErrorWindow(self.item._id,
- self.project,
- dt,
- self.item.tries,
- self.item.error)
+ detail_window = SyncRepresentationErrorWindow(self.model, selected_ids)
+
detail_window.exec()
- def _on_context_menu(self, point):
- """
- Shows menu with loader actions on Right-click.
- """
- point_index = self.table_view.indexAt(point)
- if not point_index.isValid():
- return
+ def _prepare_menu(self, item, is_multi):
+ """Adds view (and model) dependent actions to default ones"""
+ action_kwarg_map, actions_mapping, menu = \
+ super()._prepare_menu(item, is_multi)
- self.item = self.table_view.model()._data[point_index.row()]
-
- menu = QtWidgets.QMenu()
- menu.setStyleSheet(style.load_stylesheet())
- actions_mapping = {}
- actions_kwargs_mapping = {}
-
- local_site = self.item.local_site
- local_progress = self.item.local_progress
- remote_site = self.item.remote_site
- remote_progress = self.item.remote_progress
-
- for site, progress in {local_site: local_progress,
- remote_site: remote_progress}.items():
- project = self.table_view.model().project
- provider = self.sync_server.get_provider_for_site(project,
- site)
- if provider == 'local_drive':
- if 'studio' in site:
- txt = " studio version"
- else:
- txt = " local version"
- action = QtWidgets.QAction("Open in explorer" + txt)
- if progress == 1:
- actions_mapping[action] = self._open_in_explorer
- actions_kwargs_mapping[action] = {'site': site}
- menu.addAction(action)
-
- if self.item.state == lib.STATUS[2]:
+ if item.status == lib.STATUS[2] or is_multi:
action = QtWidgets.QAction("Open error detail")
actions_mapping[action] = self._show_detail
+ action_kwarg_map[action] = {"selected_ids": self._selected_ids}
+
menu.addAction(action)
- if float(remote_progress) == 1.0:
- action = QtWidgets.QAction("Re-sync active site")
- actions_mapping[action] = self._reset_local_site
- menu.addAction(action)
+ return action_kwarg_map, actions_mapping, menu
- if float(local_progress) == 1.0:
- action = QtWidgets.QAction("Re-sync remote site")
- actions_mapping[action] = self._reset_remote_site
- menu.addAction(action)
-
- if not actions_mapping:
- action = QtWidgets.QAction("< No action >")
- actions_mapping[action] = None
- menu.addAction(action)
-
- result = menu.exec_(QtGui.QCursor.pos())
- if result:
- to_run = actions_mapping[result]
- to_run_kwargs = actions_kwargs_mapping.get(result, {})
- if to_run:
- to_run(**to_run_kwargs)
-
- def _reset_local_site(self):
+ def _reset_site(self, selected_ids=None, site_name=None):
"""
 Removes error or success metadata for a particular file, which forces
 a redo of the upload/download.
"""
- self.sync_server.reset_provider_for_file(
- self.table_view.model().project,
- self.representation_id,
- 'local',
- self.item._id)
- self.table_view.model().refresh(
- load_records=self.table_view.model()._rec_loaded)
+ for file_id in selected_ids:
+ item = lib.get_item_by_id(self.model, file_id)
+ check_progress = self._get_progress(item, site_name, True)
- def _reset_remote_site(self):
- """
- Removes errors or success metadata for particular file >> forces
- redo of upload/download
- """
- self.sync_server.reset_provider_for_file(
- self.table_view.model().project,
- self.representation_id,
- 'remote',
- self.item._id)
- self.table_view.model().refresh(
- load_records=self.table_view.model()._rec_loaded)
+ # do not reset if opposite side is not fully there
+ if check_progress != 1:
+ log.debug("Not fully available {} on other side, skipping".
+ format(check_progress))
+ continue
- def _open_in_explorer(self, site):
- if not self.item:
- return
+ self.sync_server.reset_provider_for_file(
+ self.model.project,
+ self.representation_id,
+ site_name=site_name,
+ file_id=file_id,
+ force=True)
+ self.model.refresh(
+ load_records=self.model._rec_loaded)
- fpath = self.item.path
- project = self.project
- fpath = self.sync_server.get_local_file_path(project, site, fpath)
- fpath = os.path.normpath(os.path.dirname(fpath))
- if os.path.isdir(fpath):
- if 'win' in sys.platform: # windows
- subprocess.Popen('explorer "%s"' % fpath)
- elif sys.platform == 'darwin': # macOS
- subprocess.Popen(['open', fpath])
- else: # linux
- try:
- subprocess.Popen(['xdg-open', fpath])
- except OSError:
- raise OSError('unsupported xdg-open call??')
+class SyncRepresentationErrorWindow(QtWidgets.QDialog):
+ """Wrapper window to show errors during sync on file(s)"""
+ def __init__(self, model, selected_ids, parent=None):
+ super(SyncRepresentationErrorWindow, self).__init__(parent)
+ self.setWindowFlags(QtCore.Qt.Window)
+ self.setFocusPolicy(QtCore.Qt.StrongFocus)
- def _save_scrollbar(self):
- self._scrollbar_pos = self.table_view.verticalScrollBar().value()
+ self.setStyleSheet(style.load_stylesheet())
+ self.setWindowIcon(QtGui.QIcon(style.app_icon_path()))
+ self.resize(900, 150)
- def _set_scrollbar(self):
- if self._scrollbar_pos:
- self.table_view.verticalScrollBar().setValue(self._scrollbar_pos)
+ body = QtWidgets.QWidget()
+
+ container = SyncRepresentationErrorWidget(model,
+ selected_ids,
+ parent=self)
+ body_layout = QtWidgets.QHBoxLayout(body)
+ body_layout.addWidget(container)
+ body_layout.setContentsMargins(0, 0, 0, 0)
+
+ message = QtWidgets.QLabel()
+ message.hide()
+
+ layout = QtWidgets.QVBoxLayout(self)
+ layout.addWidget(body)
+
+ self.setLayout(layout)
+ self.setWindowTitle("Sync Representation Error Detail")
class SyncRepresentationErrorWidget(QtWidgets.QWidget):
"""
- Dialog to show when sync error happened, prints error message
+ Dialog shown when a sync error happens; prints formatted error messages
"""
-
- def __init__(self, _id, dt, tries, msg, parent=None):
+ def __init__(self, model, selected_ids, parent=None):
super(SyncRepresentationErrorWidget, self).__init__(parent)
- layout = QtWidgets.QHBoxLayout(self)
+ layout = QtWidgets.QVBoxLayout(self)
- txts = []
- txts.append("{}: {}".format("Last update date", pretty_timestamp(dt)))
- txts.append("{}: {}".format("Retries", str(tries)))
- txts.append("{}: {}".format("Error message", msg))
+ no_errors = True
+ for file_id in selected_ids:
+ item = lib.get_item_by_id(model, file_id)
+ if not item.created_dt or not item.sync_dt or not item.error:
+ continue
- text_area = QtWidgets.QPlainTextEdit("\n\n".join(txts))
- text_area.setReadOnly(True)
- layout.addWidget(text_area)
+ no_errors = False
+ dt = max(item.created_dt, item.sync_dt)
+
+ txts = []
+ txts.append("{}: {} ".format("Last update date",
+ pretty_timestamp(dt)))
+ txts.append("{}: {} ".format("Retries",
+ str(item.tries)))
+ txts.append("{}: {} ".format("Error message",
+ item.error))
+
+ text_area = QtWidgets.QTextEdit("\n\n".join(txts))
+ text_area.setReadOnly(True)
+ layout.addWidget(text_area)
+
+ if no_errors:
+ text_area = QtWidgets.QTextEdit()
+ text_area.setText("
".join(warnings)
+
+ dialog = QtWidgets.QMessageBox(self)
+ dialog.setText(msg)
+ dialog.setIcon(QtWidgets.QMessageBox.Warning)
+ dialog.exec_()
+
+ self.reset()
+
except Exception as exc:
formatted_traceback = traceback.format_exception(*sys.exc_info())
dialog = QtWidgets.QMessageBox(self)
diff --git a/openpype/tools/settings/settings/widgets/multiselection_combobox.py b/openpype/tools/settings/settings/widgets/multiselection_combobox.py
index da9cdd75cf..30ecb7b84b 100644
--- a/openpype/tools/settings/settings/widgets/multiselection_combobox.py
+++ b/openpype/tools/settings/settings/widgets/multiselection_combobox.py
@@ -262,7 +262,10 @@ class MultiSelectionComboBox(QtWidgets.QComboBox):
self.lines[line] = [item]
line += 1
else:
- self.lines[line].append(item)
+ if line in self.lines:
+ self.lines[line].append(item)
+ else:
+ self.lines[line] = [item]
left_x = left_x + width + self.item_spacing
self.update()
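The fix above is the classic missing-key guard: create the list on first use, append afterwards. As a side note, `dict.setdefault` expresses the same thing in one line; an illustrative sketch (not the widget code itself):

```python
def append_to_line(lines, line, item):
    """Append item to lines[line], creating the list on first use."""
    lines.setdefault(line, []).append(item)
```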
diff --git a/openpype/vendor/python/python_2/dns/__init__.py b/openpype/vendor/python/python_2/dns/__init__.py
new file mode 100644
index 0000000000..c1ce8e6061
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/__init__.py
@@ -0,0 +1,56 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009, 2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""dnspython DNS toolkit"""
+
+__all__ = [
+ 'dnssec',
+ 'e164',
+ 'edns',
+ 'entropy',
+ 'exception',
+ 'flags',
+ 'hash',
+ 'inet',
+ 'ipv4',
+ 'ipv6',
+ 'message',
+ 'name',
+ 'namedict',
+ 'node',
+ 'opcode',
+ 'query',
+ 'rcode',
+ 'rdata',
+ 'rdataclass',
+ 'rdataset',
+ 'rdatatype',
+ 'renderer',
+ 'resolver',
+ 'reversename',
+ 'rrset',
+ 'set',
+ 'tokenizer',
+ 'tsig',
+ 'tsigkeyring',
+ 'ttl',
+ 'rdtypes',
+ 'update',
+ 'version',
+ 'wiredata',
+ 'zone',
+]
diff --git a/openpype/vendor/python/python_2/dns/_compat.py b/openpype/vendor/python/python_2/dns/_compat.py
new file mode 100644
index 0000000000..ca0931c2b5
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/_compat.py
@@ -0,0 +1,59 @@
+import sys
+import decimal
+from decimal import Context
+
+PY3 = sys.version_info[0] == 3
+PY2 = sys.version_info[0] == 2
+
+
+if PY3:
+ long = int
+ xrange = range
+else:
+ long = long # pylint: disable=long-builtin
+ xrange = xrange # pylint: disable=xrange-builtin
+
+# unicode / binary types
+if PY3:
+ text_type = str
+ binary_type = bytes
+ string_types = (str,)
+ unichr = chr
+ def maybe_decode(x):
+ return x.decode()
+ def maybe_encode(x):
+ return x.encode()
+ def maybe_chr(x):
+ return x
+ def maybe_ord(x):
+ return x
+else:
+ text_type = unicode # pylint: disable=unicode-builtin, undefined-variable
+ binary_type = str
+ string_types = (
+ basestring, # pylint: disable=basestring-builtin, undefined-variable
+ )
+ unichr = unichr # pylint: disable=unichr-builtin
+ def maybe_decode(x):
+ return x
+ def maybe_encode(x):
+ return x
+ def maybe_chr(x):
+ return chr(x)
+ def maybe_ord(x):
+ return ord(x)
+
+
+def round_py2_compat(what):
+ """
+ Python 2 and Python 3 use different rounding strategies in round(). This
+ function ensures that results are python2/3 compatible and backward
+ compatible with previous py2 releases
+ :param what: float
+ :return: rounded long
+ """
+ d = Context(
+ prec=len(str(long(what))), # round to integer with max precision
+ rounding=decimal.ROUND_HALF_UP
+ ).create_decimal(str(what)) # str(): python 2.6 compat
+ return long(d)
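Python 3's built-in `round()` uses banker's rounding (ties to even), while Python 2 rounded ties away from zero; `round_py2_compat` pins the behaviour via `decimal.ROUND_HALF_UP`. A simplified sketch of the same idea using `Decimal.quantize` (not the vendored function itself):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value):
    """Round to the nearest integer, ties away from zero (Python 2 style)."""
    return int(Decimal(str(value)).quantize(Decimal('1'),
                                            rounding=ROUND_HALF_UP))
```

Note the difference: Python 3's `round(2.5)` gives 2, while half-up rounding gives 3.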
diff --git a/openpype/vendor/python/python_2/dns/dnssec.py b/openpype/vendor/python/python_2/dns/dnssec.py
new file mode 100644
index 0000000000..35da6b5a81
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/dnssec.py
@@ -0,0 +1,519 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Common DNSSEC-related functions and constants."""
+
+from io import BytesIO
+import struct
+import time
+
+import dns.exception
+import dns.name
+import dns.node
+import dns.rdataset
+import dns.rdata
+import dns.rdatatype
+import dns.rdataclass
+from ._compat import string_types
+
+
+class UnsupportedAlgorithm(dns.exception.DNSException):
+ """The DNSSEC algorithm is not supported."""
+
+
+class ValidationFailure(dns.exception.DNSException):
+ """The DNSSEC signature is invalid."""
+
+
+#: RSAMD5
+RSAMD5 = 1
+#: DH
+DH = 2
+#: DSA
+DSA = 3
+#: ECC
+ECC = 4
+#: RSASHA1
+RSASHA1 = 5
+#: DSANSEC3SHA1
+DSANSEC3SHA1 = 6
+#: RSASHA1NSEC3SHA1
+RSASHA1NSEC3SHA1 = 7
+#: RSASHA256
+RSASHA256 = 8
+#: RSASHA512
+RSASHA512 = 10
+#: ECDSAP256SHA256
+ECDSAP256SHA256 = 13
+#: ECDSAP384SHA384
+ECDSAP384SHA384 = 14
+#: INDIRECT
+INDIRECT = 252
+#: PRIVATEDNS
+PRIVATEDNS = 253
+#: PRIVATEOID
+PRIVATEOID = 254
+
+_algorithm_by_text = {
+ 'RSAMD5': RSAMD5,
+ 'DH': DH,
+ 'DSA': DSA,
+ 'ECC': ECC,
+ 'RSASHA1': RSASHA1,
+ 'DSANSEC3SHA1': DSANSEC3SHA1,
+ 'RSASHA1NSEC3SHA1': RSASHA1NSEC3SHA1,
+ 'RSASHA256': RSASHA256,
+ 'RSASHA512': RSASHA512,
+ 'INDIRECT': INDIRECT,
+ 'ECDSAP256SHA256': ECDSAP256SHA256,
+ 'ECDSAP384SHA384': ECDSAP384SHA384,
+ 'PRIVATEDNS': PRIVATEDNS,
+ 'PRIVATEOID': PRIVATEOID,
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mapping not to be a true inverse.
+
+_algorithm_by_value = {y: x for x, y in _algorithm_by_text.items()}
+
+
+def algorithm_from_text(text):
+ """Convert text into a DNSSEC algorithm value.
+
+ Returns an ``int``.
+ """
+
+ value = _algorithm_by_text.get(text.upper())
+ if value is None:
+ value = int(text)
+ return value
+
+
+def algorithm_to_text(value):
+ """Convert a DNSSEC algorithm value to text
+
+ Returns a ``str``.
+ """
+
+ text = _algorithm_by_value.get(value)
+ if text is None:
+ text = str(value)
+ return text
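Both converters fall back to treating the input as a raw number when it is not in the mnemonic table, so unknown or private algorithm codes round-trip losslessly. A minimal sketch of that lookup-with-fallback pattern (table truncated, names hypothetical):

```python
_by_text = {'RSASHA1': 5, 'RSASHA256': 8, 'ECDSAP256SHA256': 13}
_by_value = {v: k for k, v in _by_text.items()}  # programmatic inverse

def alg_from_text(text):
    """Mnemonic -> code; unknown mnemonics are parsed as plain integers."""
    value = _by_text.get(text.upper())
    return int(text) if value is None else value

def alg_to_text(value):
    """Code -> mnemonic; unknown codes come back as their decimal string."""
    return _by_value.get(value, str(value))
```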
+
+
+def _to_rdata(record, origin):
+ s = BytesIO()
+ record.to_wire(s, origin=origin)
+ return s.getvalue()
+
+
+def key_id(key, origin=None):
+ """Return the key id (a 16-bit number) for the specified key.
+
+ Note the *origin* parameter of this function is historical and
+ is not needed.
+
+ Returns an ``int`` between 0 and 65535.
+ """
+
+ rdata = _to_rdata(key, origin)
+ rdata = bytearray(rdata)
+ if key.algorithm == RSAMD5:
+ return (rdata[-3] << 8) + rdata[-2]
+ else:
+ total = 0
+ for i in range(len(rdata) // 2):
+ total += (rdata[2 * i] << 8) + \
+ rdata[2 * i + 1]
+ if len(rdata) % 2 != 0:
+ total += rdata[len(rdata) - 1] << 8
+ total += ((total >> 16) & 0xffff)
+ return total & 0xffff
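The non-RSAMD5 branch above is the key-tag computation from RFC 4034 Appendix B: a 16-bit checksum over the DNSKEY RDATA, summing big-endian byte pairs and folding the carry back in. The same arithmetic on raw bytes, as a standalone sketch:

```python
def key_tag(rdata):
    """RFC 4034 Appendix B key tag over DNSKEY RDATA bytes."""
    total = 0
    for i, byte in enumerate(bytearray(rdata)):
        # even offsets are the high byte of each 16-bit word
        total += byte << 8 if i % 2 == 0 else byte
    total += (total >> 16) & 0xffff  # fold the carry back in
    return total & 0xffff
```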
+
+
+def make_ds(name, key, algorithm, origin=None):
+ """Create a DS record for a DNSSEC key.
+
+ *name* is the owner name of the DS record.
+
+ *key* is a ``dns.rdtypes.ANY.DNSKEY``.
+
+ *algorithm* is a string describing which hash algorithm to use. The
+ currently supported hashes are "SHA1" and "SHA256". Case does not
+ matter for these strings.
+
+ *origin* is a ``dns.name.Name`` and will be used as the origin
+ if *key* is a relative name.
+
+ Returns a ``dns.rdtypes.ANY.DS``.
+ """
+
+ if algorithm.upper() == 'SHA1':
+ dsalg = 1
+ hash = SHA1.new()
+ elif algorithm.upper() == 'SHA256':
+ dsalg = 2
+ hash = SHA256.new()
+ else:
+ raise UnsupportedAlgorithm('unsupported algorithm "%s"' % algorithm)
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, origin)
+ hash.update(name.canonicalize().to_wire())
+ hash.update(_to_rdata(key, origin))
+ digest = hash.digest()
+
+ dsrdata = struct.pack("!HBB", key_id(key), key.algorithm, dsalg) + digest
+ return dns.rdata.from_wire(dns.rdataclass.IN, dns.rdatatype.DS, dsrdata, 0,
+ len(dsrdata))
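`make_ds` packs the DS RDATA as key tag (2 bytes), key algorithm (1 byte), digest type (1 byte), then the digest of the canonical owner name concatenated with the DNSKEY RDATA. A sketch of that wire layout using `hashlib` in place of pycryptodome (input values hypothetical):

```python
import hashlib
import struct

def ds_rdata_sha256(owner_wire, dnskey_rdata, key_tag, key_alg):
    """DS RDATA per RFC 4034 s5.1.4: tag | algorithm | digest type | digest."""
    digest = hashlib.sha256(owner_wire + dnskey_rdata).digest()
    return struct.pack("!HBB", key_tag, key_alg, 2) + digest  # 2 = SHA-256
```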
+
+
+def _find_candidate_keys(keys, rrsig):
+ candidate_keys = []
+ value = keys.get(rrsig.signer)
+ if value is None:
+ return None
+ if isinstance(value, dns.node.Node):
+ try:
+ rdataset = value.find_rdataset(dns.rdataclass.IN,
+ dns.rdatatype.DNSKEY)
+ except KeyError:
+ return None
+ else:
+ rdataset = value
+ for rdata in rdataset:
+ if rdata.algorithm == rrsig.algorithm and \
+ key_id(rdata) == rrsig.key_tag:
+ candidate_keys.append(rdata)
+ return candidate_keys
+
+
+def _is_rsa(algorithm):
+ return algorithm in (RSAMD5, RSASHA1,
+ RSASHA1NSEC3SHA1, RSASHA256,
+ RSASHA512)
+
+
+def _is_dsa(algorithm):
+ return algorithm in (DSA, DSANSEC3SHA1)
+
+
+def _is_ecdsa(algorithm):
+ return _have_ecdsa and (algorithm in (ECDSAP256SHA256, ECDSAP384SHA384))
+
+
+def _is_md5(algorithm):
+ return algorithm == RSAMD5
+
+
+def _is_sha1(algorithm):
+ return algorithm in (DSA, RSASHA1,
+ DSANSEC3SHA1, RSASHA1NSEC3SHA1)
+
+
+def _is_sha256(algorithm):
+ return algorithm in (RSASHA256, ECDSAP256SHA256)
+
+
+def _is_sha384(algorithm):
+ return algorithm == ECDSAP384SHA384
+
+
+def _is_sha512(algorithm):
+ return algorithm == RSASHA512
+
+
+def _make_hash(algorithm):
+ if _is_md5(algorithm):
+ return MD5.new()
+ if _is_sha1(algorithm):
+ return SHA1.new()
+ if _is_sha256(algorithm):
+ return SHA256.new()
+ if _is_sha384(algorithm):
+ return SHA384.new()
+ if _is_sha512(algorithm):
+ return SHA512.new()
+ raise ValidationFailure('unknown hash for algorithm %u' % algorithm)
+
+
+def _make_algorithm_id(algorithm):
+ if _is_md5(algorithm):
+ oid = [0x2a, 0x86, 0x48, 0x86, 0xf7, 0x0d, 0x02, 0x05]
+ elif _is_sha1(algorithm):
+ oid = [0x2b, 0x0e, 0x03, 0x02, 0x1a]
+ elif _is_sha256(algorithm):
+ oid = [0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01]
+ elif _is_sha512(algorithm):
+ oid = [0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x03]
+ else:
+ raise ValidationFailure('unknown algorithm %u' % algorithm)
+ olen = len(oid)
+ dlen = _make_hash(algorithm).digest_size
+ idbytes = [0x30] + [8 + olen + dlen] + \
+ [0x30, olen + 4] + [0x06, olen] + oid + \
+ [0x05, 0x00] + [0x04, dlen]
+ return struct.pack('!%dB' % len(idbytes), *idbytes)
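`_make_algorithm_id` builds the ASN.1 DigestInfo header that PKCS#1 v1.5 places in front of the raw digest; for SHA-256 the construction reproduces the well-known 19-byte prefix. The same arithmetic, standalone (OID taken from the table above):

```python
import struct

SHA256_OID = [0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01]

def digestinfo_prefix(oid, digest_size):
    """ASN.1 DigestInfo header preceding the raw digest in PKCS#1 v1.5."""
    olen = len(oid)
    idbytes = ([0x30, 8 + olen + digest_size, 0x30, olen + 4, 0x06, olen]
               + oid + [0x05, 0x00] + [0x04, digest_size])
    return struct.pack('!%dB' % len(idbytes), *idbytes)
```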
+
+
+def _validate_rrsig(rrset, rrsig, keys, origin=None, now=None):
+ """Validate an RRset against a single signature rdata
+
+ The owner name of *rrsig* is assumed to be the same as the owner name
+ of *rrset*.
+
+ *rrset* is the RRset to validate. It can be a ``dns.rrset.RRset`` or
+ a ``(dns.name.Name, dns.rdataset.Rdataset)`` tuple.
+
+ *rrsig* is a ``dns.rdata.Rdata``, the signature to validate.
+
+ *keys* is the key dictionary, used to find the DNSKEY associated with
+ a given name. The dictionary is keyed by a ``dns.name.Name``, and has
+ ``dns.node.Node`` or ``dns.rdataset.Rdataset`` values.
+
+ *origin* is a ``dns.name.Name``, the origin to use for relative names.
+
+ *now* is an ``int``, the time to use when validating the signatures,
+ in seconds since the UNIX epoch. The default is the current time.
+ """
+
+ if isinstance(origin, string_types):
+ origin = dns.name.from_text(origin, dns.name.root)
+
+ candidate_keys = _find_candidate_keys(keys, rrsig)
+ if candidate_keys is None:
+ raise ValidationFailure('unknown key')
+
+ for candidate_key in candidate_keys:
+ # For convenience, allow the rrset to be specified as a (name,
+ # rdataset) tuple as well as a proper rrset
+ if isinstance(rrset, tuple):
+ rrname = rrset[0]
+ rdataset = rrset[1]
+ else:
+ rrname = rrset.name
+ rdataset = rrset
+
+ if now is None:
+ now = time.time()
+ if rrsig.expiration < now:
+ raise ValidationFailure('expired')
+ if rrsig.inception > now:
+ raise ValidationFailure('not yet valid')
+
+ hash = _make_hash(rrsig.algorithm)
+
+ if _is_rsa(rrsig.algorithm):
+ keyptr = candidate_key.key
+ (bytes_,) = struct.unpack('!B', keyptr[0:1])
+ keyptr = keyptr[1:]
+ if bytes_ == 0:
+ (bytes_,) = struct.unpack('!H', keyptr[0:2])
+ keyptr = keyptr[2:]
+ rsa_e = keyptr[0:bytes_]
+ rsa_n = keyptr[bytes_:]
+ try:
+ pubkey = CryptoRSA.construct(
+ (number.bytes_to_long(rsa_n),
+ number.bytes_to_long(rsa_e)))
+ except ValueError:
+ raise ValidationFailure('invalid public key')
+ sig = rrsig.signature
+ elif _is_dsa(rrsig.algorithm):
+ keyptr = candidate_key.key
+ (t,) = struct.unpack('!B', keyptr[0:1])
+ keyptr = keyptr[1:]
+ octets = 64 + t * 8
+ dsa_q = keyptr[0:20]
+ keyptr = keyptr[20:]
+ dsa_p = keyptr[0:octets]
+ keyptr = keyptr[octets:]
+ dsa_g = keyptr[0:octets]
+ keyptr = keyptr[octets:]
+ dsa_y = keyptr[0:octets]
+ pubkey = CryptoDSA.construct(
+ (number.bytes_to_long(dsa_y),
+ number.bytes_to_long(dsa_g),
+ number.bytes_to_long(dsa_p),
+ number.bytes_to_long(dsa_q)))
+ sig = rrsig.signature[1:]
+ elif _is_ecdsa(rrsig.algorithm):
+ # use ecdsa for NIST-384p -- not currently supported by pycryptodome
+
+ keyptr = candidate_key.key
+
+ if rrsig.algorithm == ECDSAP256SHA256:
+ curve = ecdsa.curves.NIST256p
+ key_len = 32
+ elif rrsig.algorithm == ECDSAP384SHA384:
+ curve = ecdsa.curves.NIST384p
+ key_len = 48
+
+ x = number.bytes_to_long(keyptr[0:key_len])
+ y = number.bytes_to_long(keyptr[key_len:key_len * 2])
+ if not ecdsa.ecdsa.point_is_valid(curve.generator, x, y):
+ raise ValidationFailure('invalid ECDSA key')
+ point = ecdsa.ellipticcurve.Point(curve.curve, x, y, curve.order)
+ verifying_key = ecdsa.keys.VerifyingKey.from_public_point(point,
+ curve)
+ pubkey = ECKeyWrapper(verifying_key, key_len)
+ r = rrsig.signature[:key_len]
+ s = rrsig.signature[key_len:]
+ sig = ecdsa.ecdsa.Signature(number.bytes_to_long(r),
+ number.bytes_to_long(s))
+
+ else:
+ raise ValidationFailure('unknown algorithm %u' % rrsig.algorithm)
+
+ hash.update(_to_rdata(rrsig, origin)[:18])
+ hash.update(rrsig.signer.to_digestable(origin))
+
+ if rrsig.labels < len(rrname) - 1:
+ suffix = rrname.split(rrsig.labels + 1)[1]
+ rrname = dns.name.from_text('*', suffix)
+ rrnamebuf = rrname.to_digestable(origin)
+ rrfixed = struct.pack('!HHI', rdataset.rdtype, rdataset.rdclass,
+ rrsig.original_ttl)
+ rrlist = sorted(rdataset)
+ for rr in rrlist:
+ hash.update(rrnamebuf)
+ hash.update(rrfixed)
+ rrdata = rr.to_digestable(origin)
+ rrlen = struct.pack('!H', len(rrdata))
+ hash.update(rrlen)
+ hash.update(rrdata)
+
+ try:
+ if _is_rsa(rrsig.algorithm):
+ verifier = pkcs1_15.new(pubkey)
+ # will raise ValueError if verify fails:
+ verifier.verify(hash, sig)
+ elif _is_dsa(rrsig.algorithm):
+ verifier = DSS.new(pubkey, 'fips-186-3')
+ verifier.verify(hash, sig)
+ elif _is_ecdsa(rrsig.algorithm):
+ digest = hash.digest()
+ if not pubkey.verify(digest, sig):
+ raise ValueError
+ else:
+ # Raise here for code clarity; this won't actually ever happen
+ # since if the algorithm is really unknown we'd already have
+ # raised an exception above
+ raise ValidationFailure('unknown algorithm %u' % rrsig.algorithm)
+ # If we got here, we successfully verified so we can return without error
+ return
+ except ValueError:
+ # this happens on an individual validation failure
+ continue
+ # nothing verified -- raise failure:
+ raise ValidationFailure('verify failure')
+
+
+def _validate(rrset, rrsigset, keys, origin=None, now=None):
+ """Validate an RRset.
+
+ *rrset* is the RRset to validate. It can be a ``dns.rrset.RRset`` or
+ a ``(dns.name.Name, dns.rdataset.Rdataset)`` tuple.
+
+ *rrsigset* is the signature RRset to be validated. It can be a
+ ``dns.rrset.RRset`` or a ``(dns.name.Name, dns.rdataset.Rdataset)`` tuple.
+
+ *keys* is the key dictionary, used to find the DNSKEY associated with
+ a given name. The dictionary is keyed by a ``dns.name.Name``, and has
+ ``dns.node.Node`` or ``dns.rdataset.Rdataset`` values.
+
+ *origin* is a ``dns.name.Name``, the origin to use for relative names.
+
+ *now* is an ``int``, the time to use when validating the signatures,
+ in seconds since the UNIX epoch. The default is the current time.
+ """
+
+ if isinstance(origin, string_types):
+ origin = dns.name.from_text(origin, dns.name.root)
+
+ if isinstance(rrset, tuple):
+ rrname = rrset[0]
+ else:
+ rrname = rrset.name
+
+ if isinstance(rrsigset, tuple):
+ rrsigname = rrsigset[0]
+ rrsigrdataset = rrsigset[1]
+ else:
+ rrsigname = rrsigset.name
+ rrsigrdataset = rrsigset
+
+ rrname = rrname.choose_relativity(origin)
+ rrsigname = rrsigname.choose_relativity(origin)
+ if rrname != rrsigname:
+ raise ValidationFailure("owner names do not match")
+
+ for rrsig in rrsigrdataset:
+ try:
+ _validate_rrsig(rrset, rrsig, keys, origin, now)
+ return
+ except ValidationFailure:
+ pass
+ raise ValidationFailure("no RRSIGs validated")
+
+
+def _need_pycrypto(*args, **kwargs):
+ raise NotImplementedError("DNSSEC validation requires pycryptodome/pycryptodomex")
+
+
+try:
+ try:
+ # test we're using pycryptodome, not pycrypto (which misses SHA1 for example)
+ from Crypto.Hash import MD5, SHA1, SHA256, SHA384, SHA512
+ from Crypto.PublicKey import RSA as CryptoRSA, DSA as CryptoDSA
+ from Crypto.Signature import pkcs1_15, DSS
+ from Crypto.Util import number
+ except ImportError:
+ from Cryptodome.Hash import MD5, SHA1, SHA256, SHA384, SHA512
+ from Cryptodome.PublicKey import RSA as CryptoRSA, DSA as CryptoDSA
+ from Cryptodome.Signature import pkcs1_15, DSS
+ from Cryptodome.Util import number
+except ImportError:
+ validate = _need_pycrypto
+ validate_rrsig = _need_pycrypto
+ _have_pycrypto = False
+ _have_ecdsa = False
+else:
+ validate = _validate
+ validate_rrsig = _validate_rrsig
+ _have_pycrypto = True
+
+ try:
+ import ecdsa
+ import ecdsa.ecdsa
+ import ecdsa.ellipticcurve
+ import ecdsa.keys
+ except ImportError:
+ _have_ecdsa = False
+ else:
+ _have_ecdsa = True
+
+ class ECKeyWrapper(object):
+
+ def __init__(self, key, key_len):
+ self.key = key
+ self.key_len = key_len
+
+ def verify(self, digest, sig):
+ diglong = number.bytes_to_long(digest)
+ return self.key.pubkey.verifies(diglong, sig)
diff --git a/openpype/vendor/python/python_2/dns/e164.py b/openpype/vendor/python/python_2/dns/e164.py
new file mode 100644
index 0000000000..758c47a784
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/e164.py
@@ -0,0 +1,105 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS E.164 helpers."""
+
+import dns.exception
+import dns.name
+import dns.resolver
+from ._compat import string_types, maybe_decode
+
+#: The public E.164 domain.
+public_enum_domain = dns.name.from_text('e164.arpa.')
+
+
+def from_e164(text, origin=public_enum_domain):
+ """Convert an E.164 number in textual form into a Name object whose
+ value is the ENUM domain name for that number.
+
+ Non-digits in the text are ignored, i.e. "16505551212",
+ "+1.650.555.1212" and "1 (650) 555-1212" are all the same.
+
+ *text*, a ``text``, is an E.164 number in textual form.
+
+ *origin*, a ``dns.name.Name``, the domain in which the number
+ should be constructed. The default is ``e164.arpa.``.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ parts = [d for d in text if d.isdigit()]
+ parts.reverse()
+ return dns.name.from_text('.'.join(parts), origin=origin)
+
+
+def to_e164(name, origin=public_enum_domain, want_plus_prefix=True):
+ """Convert an ENUM domain name into an E.164 number.
+
+ Note that dnspython does not have any information about preferred
+ number formats within national numbering plans, so all numbers are
+ emitted as a simple string of digits, prefixed by a '+' (unless
+ *want_plus_prefix* is ``False``).
+
+ *name* is a ``dns.name.Name``, the ENUM domain name.
+
+ *origin* is a ``dns.name.Name``, a domain containing the ENUM
+ domain name. The name is relativized to this domain before being
+ converted to text. If ``None``, no relativization is done.
+
+ *want_plus_prefix* is a ``bool``. If True, add a '+' to the beginning of
+ the returned number.
+
+ Returns a ``text``.
+
+ """
+ if origin is not None:
+ name = name.relativize(origin)
+ dlabels = [d for d in name.labels if d.isdigit() and len(d) == 1]
+ if len(dlabels) != len(name.labels):
+ raise dns.exception.SyntaxError('non-digit labels in ENUM domain name')
+ dlabels.reverse()
+ text = b''.join(dlabels)
+ if want_plus_prefix:
+ text = b'+' + text
+ return maybe_decode(text)
+
+
+def query(number, domains, resolver=None):
+ """Look for NAPTR RRs for the specified number in the specified domains.
+
+    e.g. query('16505551212', ['e164.dnspython.org.', 'e164.arpa.'])
+
+ *number*, a ``text`` is the number to look for.
+
+ *domains* is an iterable containing ``dns.name.Name`` values.
+
+ *resolver*, a ``dns.resolver.Resolver``, is the resolver to use. If
+ ``None``, the default resolver is used.
+ """
+
+ if resolver is None:
+ resolver = dns.resolver.get_default_resolver()
+ e_nx = dns.resolver.NXDOMAIN()
+ for domain in domains:
+ if isinstance(domain, string_types):
+ domain = dns.name.from_text(domain)
+ qname = dns.e164.from_e164(number, domain)
+ try:
+ return resolver.query(qname, 'NAPTR')
+ except dns.resolver.NXDOMAIN as e:
+ e_nx += e
+ raise e_nx
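
The digit-reversal mapping that `from_e164` performs above can be sketched standalone. `e164_to_enum` below is a hypothetical helper that mirrors the logic without depending on dnspython:

```python
def e164_to_enum(number, origin="e164.arpa."):
    # Mirror dns.e164.from_e164: keep digits only, reverse, dot-join, append origin.
    digits = [d for d in number if d.isdigit()]
    digits.reverse()
    return ".".join(digits) + "." + origin

# Formatting characters are ignored, so these produce the same ENUM name:
assert e164_to_enum("+1 (650) 555-1212") == e164_to_enum("16505551212")
assert e164_to_enum("16505551212") == "2.1.2.1.5.5.5.0.5.6.1.e164.arpa."
```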
diff --git a/openpype/vendor/python/python_2/dns/edns.py b/openpype/vendor/python/python_2/dns/edns.py
new file mode 100644
index 0000000000..5660f7bb7a
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/edns.py
@@ -0,0 +1,269 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2009-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""EDNS Options"""
+
+from __future__ import absolute_import
+
+import math
+import struct
+
+import dns.inet
+
+#: NSID
+NSID = 3
+#: DAU
+DAU = 5
+#: DHU
+DHU = 6
+#: N3U
+N3U = 7
+#: ECS (client-subnet)
+ECS = 8
+#: EXPIRE
+EXPIRE = 9
+#: COOKIE
+COOKIE = 10
+#: KEEPALIVE
+KEEPALIVE = 11
+#: PADDING
+PADDING = 12
+#: CHAIN
+CHAIN = 13
+
+class Option(object):
+
+ """Base class for all EDNS option types."""
+
+ def __init__(self, otype):
+ """Initialize an option.
+
+ *otype*, an ``int``, is the option type.
+ """
+ self.otype = otype
+
+ def to_wire(self, file):
+ """Convert an option to wire format.
+ """
+ raise NotImplementedError
+
+ @classmethod
+ def from_wire(cls, otype, wire, current, olen):
+ """Build an EDNS option object from wire format.
+
+ *otype*, an ``int``, is the option type.
+
+ *wire*, a ``binary``, is the wire-format message.
+
+ *current*, an ``int``, is the offset in *wire* of the beginning
+ of the rdata.
+
+        *olen*, an ``int``, is the length of the wire-format option data.
+
+ Returns a ``dns.edns.Option``.
+ """
+
+ raise NotImplementedError
+
+ def _cmp(self, other):
+ """Compare an EDNS option with another option of the same type.
+
+ Returns < 0 if < *other*, 0 if == *other*, and > 0 if > *other*.
+ """
+ raise NotImplementedError
+
+ def __eq__(self, other):
+ if not isinstance(other, Option):
+ return False
+ if self.otype != other.otype:
+ return False
+ return self._cmp(other) == 0
+
+ def __ne__(self, other):
+ if not isinstance(other, Option):
+ return False
+ if self.otype != other.otype:
+ return False
+ return self._cmp(other) != 0
+
+ def __lt__(self, other):
+ if not isinstance(other, Option) or \
+ self.otype != other.otype:
+ return NotImplemented
+ return self._cmp(other) < 0
+
+ def __le__(self, other):
+ if not isinstance(other, Option) or \
+ self.otype != other.otype:
+ return NotImplemented
+ return self._cmp(other) <= 0
+
+ def __ge__(self, other):
+ if not isinstance(other, Option) or \
+ self.otype != other.otype:
+ return NotImplemented
+ return self._cmp(other) >= 0
+
+ def __gt__(self, other):
+ if not isinstance(other, Option) or \
+ self.otype != other.otype:
+ return NotImplemented
+ return self._cmp(other) > 0
+
+
+class GenericOption(Option):
+
+ """Generic Option Class
+
+ This class is used for EDNS option types for which we have no better
+ implementation.
+ """
+
+ def __init__(self, otype, data):
+ super(GenericOption, self).__init__(otype)
+ self.data = data
+
+ def to_wire(self, file):
+ file.write(self.data)
+
+ def to_text(self):
+ return "Generic %d" % self.otype
+
+ @classmethod
+ def from_wire(cls, otype, wire, current, olen):
+ return cls(otype, wire[current: current + olen])
+
+ def _cmp(self, other):
+ if self.data == other.data:
+ return 0
+ if self.data > other.data:
+ return 1
+ return -1
+
+
+class ECSOption(Option):
+ """EDNS Client Subnet (ECS, RFC7871)"""
+
+ def __init__(self, address, srclen=None, scopelen=0):
+ """*address*, a ``text``, is the client address information.
+
+ *srclen*, an ``int``, the source prefix length, which is the
+ leftmost number of bits of the address to be used for the
+ lookup. The default is 24 for IPv4 and 56 for IPv6.
+
+ *scopelen*, an ``int``, the scope prefix length. This value
+ must be 0 in queries, and should be set in responses.
+ """
+
+ super(ECSOption, self).__init__(ECS)
+ af = dns.inet.af_for_address(address)
+
+ if af == dns.inet.AF_INET6:
+ self.family = 2
+ if srclen is None:
+ srclen = 56
+ elif af == dns.inet.AF_INET:
+ self.family = 1
+ if srclen is None:
+ srclen = 24
+ else:
+ raise ValueError('Bad ip family')
+
+ self.address = address
+ self.srclen = srclen
+ self.scopelen = scopelen
+
+ addrdata = dns.inet.inet_pton(af, address)
+ nbytes = int(math.ceil(srclen/8.0))
+
+ # Truncate to srclen and pad to the end of the last octet needed
+ # See RFC section 6
+ self.addrdata = addrdata[:nbytes]
+ nbits = srclen % 8
+ if nbits != 0:
+            # keep the top nbits bits of the last byte, zero the rest
+            last = struct.pack('B', ord(self.addrdata[-1:]) & (0xff << (8 - nbits)))
+ self.addrdata = self.addrdata[:-1] + last
+
+ def to_text(self):
+ return "ECS {}/{} scope/{}".format(self.address, self.srclen,
+ self.scopelen)
+
+ def to_wire(self, file):
+ file.write(struct.pack('!H', self.family))
+ file.write(struct.pack('!BB', self.srclen, self.scopelen))
+ file.write(self.addrdata)
+
+ @classmethod
+ def from_wire(cls, otype, wire, cur, olen):
+ family, src, scope = struct.unpack('!HBB', wire[cur:cur+4])
+ cur += 4
+
+ addrlen = int(math.ceil(src/8.0))
+
+ if family == 1:
+ af = dns.inet.AF_INET
+ pad = 4 - addrlen
+ elif family == 2:
+ af = dns.inet.AF_INET6
+ pad = 16 - addrlen
+ else:
+ raise ValueError('unsupported family')
+
+ addr = dns.inet.inet_ntop(af, wire[cur:cur+addrlen] + b'\x00' * pad)
+ return cls(addr, src, scope)
+
+ def _cmp(self, other):
+ if self.addrdata == other.addrdata:
+ return 0
+ if self.addrdata > other.addrdata:
+ return 1
+ return -1
+
+_type_to_class = {
+ ECS: ECSOption
+}
+
+def get_option_class(otype):
+ """Return the class for the specified option type.
+
+ The GenericOption class is used if a more specific class is not
+ known.
+ """
+
+ cls = _type_to_class.get(otype)
+ if cls is None:
+ cls = GenericOption
+ return cls
+
+
+def option_from_wire(otype, wire, current, olen):
+ """Build an EDNS option object from wire format.
+
+ *otype*, an ``int``, is the option type.
+
+ *wire*, a ``binary``, is the wire-format message.
+
+ *current*, an ``int``, is the offset in *wire* of the beginning
+ of the rdata.
+
+    *olen*, an ``int``, is the length of the wire-format option data.
+
+ Returns an instance of a subclass of ``dns.edns.Option``.
+ """
+
+ cls = get_option_class(otype)
+ return cls.from_wire(otype, wire, current, olen)
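
`ECSOption.from_wire` above decodes the RFC 7871 option body: FAMILY, SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH, then the address truncated to the source prefix. A rough stdlib-only sketch of the same decoding, IPv4 only; `parse_ecs` is illustrative and not a dnspython API:

```python
import math
import socket
import struct

def parse_ecs(body):
    # Decode an ECS option body (RFC 7871): FAMILY, SOURCE, SCOPE, ADDRESS.
    family, src, scope = struct.unpack('!HBB', body[:4])
    if family != 1:
        raise ValueError('this sketch handles IPv4 (family 1) only')
    addrlen = int(math.ceil(src / 8.0))
    # Pad the truncated address back out to 4 bytes before rendering it.
    addr = socket.inet_ntoa(body[4:4 + addrlen] + b'\x00' * (4 - addrlen))
    return addr, src, scope

wire = struct.pack('!HBB', 1, 24, 0) + b'\xc0\x00\x02'   # 192.0.2.0/24
assert parse_ecs(wire) == ('192.0.2.0', 24, 0)
```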
diff --git a/openpype/vendor/python/python_2/dns/entropy.py b/openpype/vendor/python/python_2/dns/entropy.py
new file mode 100644
index 0000000000..00c6a4b389
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/entropy.py
@@ -0,0 +1,148 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2009-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import os
+import random
+import time
+from ._compat import long, binary_type
+try:
+ import threading as _threading
+except ImportError:
+ import dummy_threading as _threading
+
+
+class EntropyPool(object):
+
+ # This is an entropy pool for Python implementations that do not
+ # have a working SystemRandom. I'm not sure there are any, but
+ # leaving this code doesn't hurt anything as the library code
+ # is used if present.
+
+ def __init__(self, seed=None):
+ self.pool_index = 0
+ self.digest = None
+ self.next_byte = 0
+ self.lock = _threading.Lock()
+ try:
+ import hashlib
+ self.hash = hashlib.sha1()
+ self.hash_len = 20
+ except ImportError:
+ try:
+ import sha
+ self.hash = sha.new()
+ self.hash_len = 20
+ except ImportError:
+ import md5 # pylint: disable=import-error
+ self.hash = md5.new()
+ self.hash_len = 16
+ self.pool = bytearray(b'\0' * self.hash_len)
+ if seed is not None:
+ self.stir(bytearray(seed))
+ self.seeded = True
+ self.seed_pid = os.getpid()
+ else:
+ self.seeded = False
+ self.seed_pid = 0
+
+ def stir(self, entropy, already_locked=False):
+ if not already_locked:
+ self.lock.acquire()
+ try:
+ for c in entropy:
+ if self.pool_index == self.hash_len:
+ self.pool_index = 0
+ b = c & 0xff
+ self.pool[self.pool_index] ^= b
+ self.pool_index += 1
+ finally:
+ if not already_locked:
+ self.lock.release()
+
+ def _maybe_seed(self):
+ if not self.seeded or self.seed_pid != os.getpid():
+ try:
+ seed = os.urandom(16)
+ except Exception:
+ try:
+ r = open('/dev/urandom', 'rb', 0)
+ try:
+ seed = r.read(16)
+ finally:
+ r.close()
+ except Exception:
+ seed = str(time.time())
+ self.seeded = True
+ self.seed_pid = os.getpid()
+ self.digest = None
+ seed = bytearray(seed)
+ self.stir(seed, True)
+
+ def random_8(self):
+ self.lock.acquire()
+ try:
+ self._maybe_seed()
+ if self.digest is None or self.next_byte == self.hash_len:
+ self.hash.update(binary_type(self.pool))
+ self.digest = bytearray(self.hash.digest())
+ self.stir(self.digest, True)
+ self.next_byte = 0
+ value = self.digest[self.next_byte]
+ self.next_byte += 1
+ finally:
+ self.lock.release()
+ return value
+
+ def random_16(self):
+ return self.random_8() * 256 + self.random_8()
+
+ def random_32(self):
+ return self.random_16() * 65536 + self.random_16()
+
+ def random_between(self, first, last):
+ size = last - first + 1
+ if size > long(4294967296):
+ raise ValueError('too big')
+ if size > 65536:
+ rand = self.random_32
+ max = long(4294967295)
+ elif size > 256:
+ rand = self.random_16
+ max = 65535
+ else:
+ rand = self.random_8
+ max = 255
+ return first + size * rand() // (max + 1)
+
+pool = EntropyPool()
+
+try:
+ system_random = random.SystemRandom()
+except Exception:
+ system_random = None
+
+def random_16():
+ if system_random is not None:
+ return system_random.randrange(0, 65536)
+ else:
+ return pool.random_16()
+
+def between(first, last):
+ if system_random is not None:
+ return system_random.randrange(first, last + 1)
+ else:
+ return pool.random_between(first, last)
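
`random_between` above scales a raw 8/16/32-bit generator value onto an arbitrary interval. The scaling step in isolation, with the generator injected so it can be checked deterministically (`scaled_between` is a hypothetical name, not a dnspython API):

```python
def scaled_between(first, last, rand, rand_max):
    # Map a raw generator value in [0, rand_max] onto [first, last],
    # the same arithmetic EntropyPool.random_between uses.
    size = last - first + 1
    return first + size * rand() // (rand_max + 1)

assert scaled_between(10, 20, lambda: 0, 255) == 10     # smallest raw value
assert scaled_between(10, 20, lambda: 255, 255) == 20   # largest raw value
```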
diff --git a/openpype/vendor/python/python_2/dns/exception.py b/openpype/vendor/python/python_2/dns/exception.py
new file mode 100644
index 0000000000..71ff04f148
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/exception.py
@@ -0,0 +1,128 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Common DNS Exceptions.
+
+Dnspython modules may also define their own exceptions, which will
+always be subclasses of ``DNSException``.
+"""
+
+class DNSException(Exception):
+ """Abstract base class shared by all dnspython exceptions.
+
+ It supports two basic modes of operation:
+
+ a) Old/compatible mode is used if ``__init__`` was called with
+ empty *kwargs*. In compatible mode all *args* are passed
+ to the standard Python Exception class as before and all *args* are
+ printed by the standard ``__str__`` implementation. Class variable
+ ``msg`` (or doc string if ``msg`` is ``None``) is returned from ``str()``
+ if *args* is empty.
+
+ b) New/parametrized mode is used if ``__init__`` was called with
+ non-empty *kwargs*.
+ In the new mode *args* must be empty and all kwargs must match
+ those set in class variable ``supp_kwargs``. All kwargs are stored inside
+ ``self.kwargs`` and used in a new ``__str__`` implementation to construct
+ a formatted message based on the ``fmt`` class variable, a ``string``.
+
+ In the simplest case it is enough to override the ``supp_kwargs``
+ and ``fmt`` class variables to get nice parametrized messages.
+ """
+
+ msg = None # non-parametrized message
+ supp_kwargs = set() # accepted parameters for _fmt_kwargs (sanity check)
+ fmt = None # message parametrized with results from _fmt_kwargs
+
+ def __init__(self, *args, **kwargs):
+ self._check_params(*args, **kwargs)
+ if kwargs:
+ self.kwargs = self._check_kwargs(**kwargs)
+ self.msg = str(self)
+ else:
+ self.kwargs = dict() # defined but empty for old mode exceptions
+ if self.msg is None:
+ # doc string is better implicit message than empty string
+ self.msg = self.__doc__
+ if args:
+ super(DNSException, self).__init__(*args)
+ else:
+ super(DNSException, self).__init__(self.msg)
+
+ def _check_params(self, *args, **kwargs):
+ """Old exceptions supported only args and not kwargs.
+
+ For sanity we do not allow to mix old and new behavior."""
+ if args or kwargs:
+ assert bool(args) != bool(kwargs), \
+ 'keyword arguments are mutually exclusive with positional args'
+
+ def _check_kwargs(self, **kwargs):
+ if kwargs:
+ assert set(kwargs.keys()) == self.supp_kwargs, \
+ 'following set of keyword args is required: %s' % (
+ self.supp_kwargs)
+ return kwargs
+
+ def _fmt_kwargs(self, **kwargs):
+ """Format kwargs before printing them.
+
+ Resulting dictionary has to have keys necessary for str.format call
+ on fmt class variable.
+ """
+ fmtargs = {}
+ for kw, data in kwargs.items():
+ if isinstance(data, (list, set)):
+                # convert list of items to list of str()
+ fmtargs[kw] = list(map(str, data))
+ if len(fmtargs[kw]) == 1:
+ # remove list brackets [] from single-item lists
+ fmtargs[kw] = fmtargs[kw].pop()
+ else:
+ fmtargs[kw] = data
+ return fmtargs
+
+ def __str__(self):
+ if self.kwargs and self.fmt:
+ # provide custom message constructed from keyword arguments
+ fmtargs = self._fmt_kwargs(**self.kwargs)
+ return self.fmt.format(**fmtargs)
+ else:
+ # print *args directly in the same way as old DNSException
+ return super(DNSException, self).__str__()
+
+
+class FormError(DNSException):
+ """DNS message is malformed."""
+
+
+class SyntaxError(DNSException):
+ """Text input is malformed."""
+
+
+class UnexpectedEnd(SyntaxError):
+ """Text input ended unexpectedly."""
+
+
+class TooBig(DNSException):
+ """The DNS message is too big."""
+
+
+class Timeout(DNSException):
+ """The DNS operation timed out."""
+ supp_kwargs = {'timeout'}
+ fmt = "The DNS operation timed out after {timeout} seconds"
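
The parametrized mode described in the `DNSException` docstring (a `supp_kwargs` whitelist plus a `fmt` template) can be sketched without dnspython. `DemoTimeout` below imitates the pattern rather than reproducing the real class:

```python
class DemoTimeout(Exception):
    # Imitates DNSException's parametrized mode: kwargs must match
    # supp_kwargs, and the message is built from the fmt template.
    supp_kwargs = {'timeout'}
    fmt = "The operation timed out after {timeout} seconds"

    def __init__(self, **kwargs):
        assert set(kwargs) == self.supp_kwargs
        self.kwargs = kwargs
        super().__init__(self.fmt.format(**kwargs))

assert str(DemoTimeout(timeout=5)) == "The operation timed out after 5 seconds"
```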
diff --git a/openpype/vendor/python/python_2/dns/flags.py b/openpype/vendor/python/python_2/dns/flags.py
new file mode 100644
index 0000000000..0119dec71f
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/flags.py
@@ -0,0 +1,130 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Message Flags."""
+
+# Standard DNS flags
+
+#: Query Response
+QR = 0x8000
+#: Authoritative Answer
+AA = 0x0400
+#: Truncated Response
+TC = 0x0200
+#: Recursion Desired
+RD = 0x0100
+#: Recursion Available
+RA = 0x0080
+#: Authentic Data
+AD = 0x0020
+#: Checking Disabled
+CD = 0x0010
+
+# EDNS flags
+
+#: DNSSEC answer OK
+DO = 0x8000
+
+_by_text = {
+ 'QR': QR,
+ 'AA': AA,
+ 'TC': TC,
+ 'RD': RD,
+ 'RA': RA,
+ 'AD': AD,
+ 'CD': CD
+}
+
+_edns_by_text = {
+ 'DO': DO
+}
+
+
+# We construct the inverse mappings programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mappings not to be true inverses.
+
+_by_value = {y: x for x, y in _by_text.items()}
+
+_edns_by_value = {y: x for x, y in _edns_by_text.items()}
+
+
+def _order_flags(table):
+ order = list(table.items())
+ order.sort()
+ order.reverse()
+ return order
+
+_flags_order = _order_flags(_by_value)
+
+_edns_flags_order = _order_flags(_edns_by_value)
+
+
+def _from_text(text, table):
+ flags = 0
+ tokens = text.split()
+ for t in tokens:
+ flags = flags | table[t.upper()]
+ return flags
+
+
+def _to_text(flags, table, order):
+ text_flags = []
+ for k, v in order:
+ if flags & k != 0:
+ text_flags.append(v)
+ return ' '.join(text_flags)
+
+
+def from_text(text):
+ """Convert a space-separated list of flag text values into a flags
+ value.
+
+ Returns an ``int``
+ """
+
+ return _from_text(text, _by_text)
+
+
+def to_text(flags):
+ """Convert a flags value into a space-separated list of flag text
+ values.
+
+ Returns a ``text``.
+ """
+
+ return _to_text(flags, _by_value, _flags_order)
+
+
+def edns_from_text(text):
+ """Convert a space-separated list of EDNS flag text values into a EDNS
+ flags value.
+
+ Returns an ``int``
+ """
+
+ return _from_text(text, _edns_by_text)
+
+
+def edns_to_text(flags):
+ """Convert an EDNS flags value into a space-separated list of EDNS flag
+ text values.
+
+ Returns a ``text``.
+ """
+
+ return _to_text(flags, _edns_by_value, _edns_flags_order)
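
The table-driven conversion used above is small enough to restate standalone. The values are the standard DNS header flag bits; `flags_from_text` is a hypothetical name, not the dnspython function:

```python
QR, AA, TC, RD, RA, AD, CD = 0x8000, 0x0400, 0x0200, 0x0100, 0x0080, 0x0020, 0x0010
TABLE = {'QR': QR, 'AA': AA, 'TC': TC, 'RD': RD, 'RA': RA, 'AD': AD, 'CD': CD}

def flags_from_text(text):
    # OR together the bit for each whitespace-separated flag mnemonic.
    value = 0
    for token in text.split():
        value |= TABLE[token.upper()]
    return value

# A typical recursive answer sets QR, RD and RA:
assert flags_from_text('QR RD RA') == 0x8180
```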
diff --git a/openpype/vendor/python/python_2/dns/grange.py b/openpype/vendor/python/python_2/dns/grange.py
new file mode 100644
index 0000000000..ffe8be7c46
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/grange.py
@@ -0,0 +1,69 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2012-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS GENERATE range conversion."""
+
+import dns.exception
+
+def from_text(text):
+ """Convert the text form of a range in a ``$GENERATE`` statement to an
+ integer.
+
+ *text*, a ``str``, the textual range in ``$GENERATE`` form.
+
+ Returns a tuple of three ``int`` values ``(start, stop, step)``.
+ """
+
+ # TODO, figure out the bounds on start, stop and step.
+ step = 1
+ cur = ''
+ state = 0
+ # state 0 1 2 3 4
+ # x - y / z
+
+ if text and text[0] == '-':
+ raise dns.exception.SyntaxError("Start cannot be a negative number")
+
+ for c in text:
+ if c == '-' and state == 0:
+ start = int(cur)
+ cur = ''
+ state = 2
+ elif c == '/':
+ stop = int(cur)
+ cur = ''
+ state = 4
+ elif c.isdigit():
+ cur += c
+ else:
+ raise dns.exception.SyntaxError("Could not parse %s" % (c))
+
+ if state in (1, 3):
+ raise dns.exception.SyntaxError()
+
+ if state == 2:
+ stop = int(cur)
+
+ if state == 4:
+ step = int(cur)
+
+ assert step >= 1
+ assert start >= 0
+ assert start <= stop
+ # TODO, can start == stop?
+
+ return (start, stop, step)
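
`from_text` above is a small state machine over the `start-stop[/step]` form. An equivalent, simplified split-based parser; `parse_generate_range` is a hypothetical name used only for this sketch:

```python
def parse_generate_range(text):
    # Parse a $GENERATE range "start-stop[/step]" into (start, stop, step).
    step = 1
    if '/' in text:
        text, step_text = text.split('/', 1)
        step = int(step_text)
    start_text, _, stop_text = text.partition('-')
    start, stop = int(start_text), int(stop_text)
    if not (0 <= start <= stop and step >= 1):
        raise ValueError('bad $GENERATE range')
    return start, stop, step

assert parse_generate_range('1-100/2') == (1, 100, 2)
assert parse_generate_range('5-10') == (5, 10, 1)
```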
diff --git a/openpype/vendor/python/python_2/dns/hash.py b/openpype/vendor/python/python_2/dns/hash.py
new file mode 100644
index 0000000000..1713e62894
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/hash.py
@@ -0,0 +1,37 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Hashing backwards compatibility wrapper"""
+
+import hashlib
+import warnings
+
+warnings.warn(
+ "dns.hash module will be removed in future versions. Please use hashlib instead.",
+ DeprecationWarning)
+
+hashes = {}
+hashes['MD5'] = hashlib.md5
+hashes['SHA1'] = hashlib.sha1
+hashes['SHA224'] = hashlib.sha224
+hashes['SHA256'] = hashlib.sha256
+hashes['SHA384'] = hashlib.sha384
+hashes['SHA512'] = hashlib.sha512
+
+
+def get(algorithm):
+ return hashes[algorithm.upper()]
diff --git a/openpype/vendor/python/python_2/dns/inet.py b/openpype/vendor/python/python_2/dns/inet.py
new file mode 100644
index 0000000000..c8d7c1b404
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/inet.py
@@ -0,0 +1,124 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Generic Internet address helper functions."""
+
+import socket
+
+import dns.ipv4
+import dns.ipv6
+
+from ._compat import maybe_ord
+
+# We assume that AF_INET is always defined.
+
+AF_INET = socket.AF_INET
+
+# AF_INET6 might not be defined in the socket module, but we need it.
+# We'll try to use the socket module's value, and if it doesn't work,
+# we'll use our own value.
+
+try:
+ AF_INET6 = socket.AF_INET6
+except AttributeError:
+ AF_INET6 = 9999
+
+
+def inet_pton(family, text):
+ """Convert the textual form of a network address into its binary form.
+
+ *family* is an ``int``, the address family.
+
+ *text* is a ``text``, the textual address.
+
+ Raises ``NotImplementedError`` if the address family specified is not
+ implemented.
+
+ Returns a ``binary``.
+ """
+
+ if family == AF_INET:
+ return dns.ipv4.inet_aton(text)
+ elif family == AF_INET6:
+ return dns.ipv6.inet_aton(text)
+ else:
+ raise NotImplementedError
+
+
+def inet_ntop(family, address):
+ """Convert the binary form of a network address into its textual form.
+
+ *family* is an ``int``, the address family.
+
+ *address* is a ``binary``, the network address in binary form.
+
+ Raises ``NotImplementedError`` if the address family specified is not
+ implemented.
+
+ Returns a ``text``.
+ """
+
+ if family == AF_INET:
+ return dns.ipv4.inet_ntoa(address)
+ elif family == AF_INET6:
+ return dns.ipv6.inet_ntoa(address)
+ else:
+ raise NotImplementedError
+
+
+def af_for_address(text):
+ """Determine the address family of a textual-form network address.
+
+ *text*, a ``text``, the textual address.
+
+ Raises ``ValueError`` if the address family cannot be determined
+ from the input.
+
+ Returns an ``int``.
+ """
+
+ try:
+ dns.ipv4.inet_aton(text)
+ return AF_INET
+ except Exception:
+ try:
+ dns.ipv6.inet_aton(text)
+ return AF_INET6
+        except Exception:
+ raise ValueError
+
+
+def is_multicast(text):
+ """Is the textual-form network address a multicast address?
+
+ *text*, a ``text``, the textual address.
+
+ Raises ``ValueError`` if the address family cannot be determined
+ from the input.
+
+ Returns a ``bool``.
+ """
+
+ try:
+ first = maybe_ord(dns.ipv4.inet_aton(text)[0])
+ return first >= 224 and first <= 239
+ except Exception:
+ try:
+ first = maybe_ord(dns.ipv6.inet_aton(text)[0])
+ return first == 255
+ except Exception:
+ raise ValueError
diff --git a/openpype/vendor/python/python_2/dns/ipv4.py b/openpype/vendor/python/python_2/dns/ipv4.py
new file mode 100644
index 0000000000..8fc4f7dcfd
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/ipv4.py
@@ -0,0 +1,63 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""IPv4 helper functions."""
+
+import struct
+
+import dns.exception
+from ._compat import binary_type
+
+def inet_ntoa(address):
+ """Convert an IPv4 address in binary form to text form.
+
+ *address*, a ``binary``, the IPv4 address in binary form.
+
+ Returns a ``text``.
+ """
+
+ if len(address) != 4:
+ raise dns.exception.SyntaxError
+ if not isinstance(address, bytearray):
+ address = bytearray(address)
+ return ('%u.%u.%u.%u' % (address[0], address[1],
+ address[2], address[3]))
+
+def inet_aton(text):
+ """Convert an IPv4 address in text form to binary form.
+
+ *text*, a ``text``, the IPv4 address in textual form.
+
+ Returns a ``binary``.
+ """
+
+ if not isinstance(text, binary_type):
+ text = text.encode()
+ parts = text.split(b'.')
+ if len(parts) != 4:
+ raise dns.exception.SyntaxError
+ for part in parts:
+ if not part.isdigit():
+ raise dns.exception.SyntaxError
+ if len(part) > 1 and part[0] == '0':
+ # No leading zeros
+ raise dns.exception.SyntaxError
+ try:
+ bytes = [int(part) for part in parts]
+ return struct.pack('BBBB', *bytes)
+    except Exception:
+ raise dns.exception.SyntaxError
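
`inet_aton` above validates four decimal octets and packs them with `struct`. A condensed sketch of the same idea (the leading-zero rejection is omitted); `ipv4_aton` is illustrative, not the dnspython function:

```python
import struct

def ipv4_aton(text):
    # Pack dotted-quad text into 4 bytes, as dns.ipv4.inet_aton does.
    parts = text.split('.')
    if len(parts) != 4 or any(not p.isdigit() or int(p) > 255 for p in parts):
        raise ValueError('bad IPv4 address')
    return struct.pack('BBBB', *(int(p) for p in parts))

assert ipv4_aton('192.0.2.1') == b'\xc0\x00\x02\x01'
```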
diff --git a/openpype/vendor/python/python_2/dns/ipv6.py b/openpype/vendor/python/python_2/dns/ipv6.py
new file mode 100644
index 0000000000..128e56c8f1
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/ipv6.py
@@ -0,0 +1,181 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""IPv6 helper functions."""
+
+import re
+import binascii
+
+import dns.exception
+import dns.ipv4
+from ._compat import xrange, binary_type, maybe_decode
+
+_leading_zero = re.compile(r'0+([0-9a-f]+)')
+
+def inet_ntoa(address):
+ """Convert an IPv6 address in binary form to text form.
+
+ *address*, a ``binary``, the IPv6 address in binary form.
+
+ Raises ``ValueError`` if the address isn't 16 bytes long.
+ Returns a ``text``.
+ """
+
+ if len(address) != 16:
+ raise ValueError("IPv6 addresses are 16 bytes long")
+ hex = binascii.hexlify(address)
+ chunks = []
+ i = 0
+ l = len(hex)
+ while i < l:
+ chunk = maybe_decode(hex[i : i + 4])
+ # strip leading zeros. we do this with an re instead of
+ # with lstrip() because lstrip() didn't support chars until
+ # python 2.2.2
+ m = _leading_zero.match(chunk)
+ if m is not None:
+ chunk = m.group(1)
+ chunks.append(chunk)
+ i += 4
+ #
+ # Compress the longest subsequence of 0-value chunks to ::
+ #
+ best_start = 0
+ best_len = 0
+ start = -1
+ last_was_zero = False
+ for i in xrange(8):
+ if chunks[i] != '0':
+ if last_was_zero:
+ end = i
+ current_len = end - start
+ if current_len > best_len:
+ best_start = start
+ best_len = current_len
+ last_was_zero = False
+ elif not last_was_zero:
+ start = i
+ last_was_zero = True
+ if last_was_zero:
+ end = 8
+ current_len = end - start
+ if current_len > best_len:
+ best_start = start
+ best_len = current_len
+ if best_len > 1:
+ if best_start == 0 and \
+ (best_len == 6 or
+ best_len == 5 and chunks[5] == 'ffff'):
+ # We have an embedded IPv4 address
+ if best_len == 6:
+ prefix = '::'
+ else:
+ prefix = '::ffff:'
+ hex = prefix + dns.ipv4.inet_ntoa(address[12:])
+ else:
+ hex = ':'.join(chunks[:best_start]) + '::' + \
+ ':'.join(chunks[best_start + best_len:])
+ else:
+ hex = ':'.join(chunks)
+ return hex
+
+_v4_ending = re.compile(br'(.*):(\d+\.\d+\.\d+\.\d+)$')
+_colon_colon_start = re.compile(br'::.*')
+_colon_colon_end = re.compile(br'.*::$')
+
+def inet_aton(text):
+ """Convert an IPv6 address in text form to binary form.
+
+ *text*, a ``text``, the IPv6 address in textual form.
+
+ Returns a ``binary``.
+ """
+
+ #
+ # Our aim here is not something fast; we just want something that works.
+ #
+ if not isinstance(text, binary_type):
+ text = text.encode()
+
+ if text == b'::':
+ text = b'0::'
+ #
+ # Get rid of the icky dot-quad syntax if we have it.
+ #
+ m = _v4_ending.match(text)
+ if m is not None:
+ b = bytearray(dns.ipv4.inet_aton(m.group(2)))
+ text = (u"{}:{:02x}{:02x}:{:02x}{:02x}".format(m.group(1).decode(),
+ b[0], b[1], b[2],
+ b[3])).encode()
+ #
+ # Try to turn '::<whatever>' into ':<whatever>'; if no match try to
+ # turn '<whatever>::' into '<whatever>:'
+ #
+ m = _colon_colon_start.match(text)
+ if m is not None:
+ text = text[1:]
+ else:
+ m = _colon_colon_end.match(text)
+ if m is not None:
+ text = text[:-1]
+ #
+ # Now canonicalize into 8 chunks of 4 hex digits each
+ #
+ chunks = text.split(b':')
+ l = len(chunks)
+ if l > 8:
+ raise dns.exception.SyntaxError
+ seen_empty = False
+ canonical = []
+ for c in chunks:
+ if c == b'':
+ if seen_empty:
+ raise dns.exception.SyntaxError
+ seen_empty = True
+ for i in xrange(0, 8 - l + 1):
+ canonical.append(b'0000')
+ else:
+ lc = len(c)
+ if lc > 4:
+ raise dns.exception.SyntaxError
+ if lc != 4:
+ c = (b'0' * (4 - lc)) + c
+ canonical.append(c)
+ if l < 8 and not seen_empty:
+ raise dns.exception.SyntaxError
+ text = b''.join(canonical)
+
+ #
+ # Finally we can go to binary.
+ #
+ try:
+ return binascii.unhexlify(text)
+ except (binascii.Error, TypeError):
+ raise dns.exception.SyntaxError
+
+_mapped_prefix = b'\x00' * 10 + b'\xff\xff'
+
+def is_mapped(address):
+ """Is the specified address a mapped IPv4 address?
+
+ *address*, a ``binary`` is an IPv6 address in binary form.
+
+ Returns a ``bool``.
+ """
+
+ return address.startswith(_mapped_prefix)
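The heart of `inet_ntoa` above is the scan that finds the longest run of zero-valued chunks to collapse into `::`. That scan can be illustrated in isolation — `compress_zeros` is a hypothetical standalone helper, not part of the module:

```python
def compress_zeros(chunks):
    """Join 8 hex chunks with ':', collapsing the longest run of
    two or more '0' chunks into '::', as dns.ipv6.inet_ntoa does."""
    best_start, best_len = 0, 0
    start, last_was_zero = -1, False
    # Iterate one step past the end so a trailing zero run is closed out
    for i in range(len(chunks) + 1):
        zero = i < len(chunks) and chunks[i] == '0'
        if zero and not last_was_zero:
            start = i
            last_was_zero = True
        elif not zero and last_was_zero:
            if i - start > best_len:
                best_start, best_len = start, i - start
            last_was_zero = False
    if best_len > 1:
        return (':'.join(chunks[:best_start]) + '::' +
                ':'.join(chunks[best_start + best_len:]))
    return ':'.join(chunks)
```

A run of length one is left alone (`best_len > 1`), matching the vendored code, since `::` must replace at least two zero groups.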
diff --git a/openpype/vendor/python/python_2/dns/message.py b/openpype/vendor/python/python_2/dns/message.py
new file mode 100644
index 0000000000..9d2b2f43c9
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/message.py
@@ -0,0 +1,1175 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Messages"""
+
+from __future__ import absolute_import
+
+from io import StringIO
+import struct
+import time
+
+import dns.edns
+import dns.exception
+import dns.flags
+import dns.name
+import dns.opcode
+import dns.entropy
+import dns.rcode
+import dns.rdata
+import dns.rdataclass
+import dns.rdatatype
+import dns.rrset
+import dns.renderer
+import dns.tsig
+import dns.wiredata
+
+from ._compat import long, xrange, string_types
+
+
+class ShortHeader(dns.exception.FormError):
+ """The DNS packet passed to from_wire() is too short."""
+
+
+class TrailingJunk(dns.exception.FormError):
+ """The DNS packet passed to from_wire() has extra junk at the end of it."""
+
+
+class UnknownHeaderField(dns.exception.DNSException):
+ """The header field name was not recognized when converting from text
+ into a message."""
+
+
+class BadEDNS(dns.exception.FormError):
+ """An OPT record occurred somewhere other than the start of
+ the additional data section."""
+
+
+class BadTSIG(dns.exception.FormError):
+ """A TSIG record occurred somewhere other than the end of
+ the additional data section."""
+
+
+class UnknownTSIGKey(dns.exception.DNSException):
+ """A TSIG with an unknown key was received."""
+
+
+#: The question section number
+QUESTION = 0
+
+#: The answer section number
+ANSWER = 1
+
+#: The authority section number
+AUTHORITY = 2
+
+#: The additional section number
+ADDITIONAL = 3
+
+class Message(object):
+ """A DNS message."""
+
+ def __init__(self, id=None):
+ if id is None:
+ self.id = dns.entropy.random_16()
+ else:
+ self.id = id
+ self.flags = 0
+ self.question = []
+ self.answer = []
+ self.authority = []
+ self.additional = []
+ self.edns = -1
+ self.ednsflags = 0
+ self.payload = 0
+ self.options = []
+ self.request_payload = 0
+ self.keyring = None
+ self.keyname = None
+ self.keyalgorithm = dns.tsig.default_algorithm
+ self.request_mac = b''
+ self.other_data = b''
+ self.tsig_error = 0
+ self.fudge = 300
+ self.original_id = self.id
+ self.mac = b''
+ self.xfr = False
+ self.origin = None
+ self.tsig_ctx = None
+ self.had_tsig = False
+ self.multi = False
+ self.first = True
+ self.index = {}
+
+ def __repr__(self):
+ return '<DNS message, ID ' + repr(self.id) + '>'
+
+ def __str__(self):
+ return self.to_text()
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ """Convert the message to text.
+
+ The *origin*, *relativize*, and any other keyword
+ arguments are passed to the RRset ``to_text()`` method.
+
+ Returns a ``text``.
+ """
+
+ s = StringIO()
+ s.write(u'id %d\n' % self.id)
+ s.write(u'opcode %s\n' %
+ dns.opcode.to_text(dns.opcode.from_flags(self.flags)))
+ rc = dns.rcode.from_flags(self.flags, self.ednsflags)
+ s.write(u'rcode %s\n' % dns.rcode.to_text(rc))
+ s.write(u'flags %s\n' % dns.flags.to_text(self.flags))
+ if self.edns >= 0:
+ s.write(u'edns %s\n' % self.edns)
+ if self.ednsflags != 0:
+ s.write(u'eflags %s\n' %
+ dns.flags.edns_to_text(self.ednsflags))
+ s.write(u'payload %d\n' % self.payload)
+ for opt in self.options:
+ s.write(u'option %s\n' % opt.to_text())
+ is_update = dns.opcode.is_update(self.flags)
+ if is_update:
+ s.write(u';ZONE\n')
+ else:
+ s.write(u';QUESTION\n')
+ for rrset in self.question:
+ s.write(rrset.to_text(origin, relativize, **kw))
+ s.write(u'\n')
+ if is_update:
+ s.write(u';PREREQ\n')
+ else:
+ s.write(u';ANSWER\n')
+ for rrset in self.answer:
+ s.write(rrset.to_text(origin, relativize, **kw))
+ s.write(u'\n')
+ if is_update:
+ s.write(u';UPDATE\n')
+ else:
+ s.write(u';AUTHORITY\n')
+ for rrset in self.authority:
+ s.write(rrset.to_text(origin, relativize, **kw))
+ s.write(u'\n')
+ s.write(u';ADDITIONAL\n')
+ for rrset in self.additional:
+ s.write(rrset.to_text(origin, relativize, **kw))
+ s.write(u'\n')
+ #
+ # We strip off the final \n so the caller can print the result without
+ # doing weird things to get around eccentricities in Python print
+ # formatting
+ #
+ return s.getvalue()[:-1]
+
+ def __eq__(self, other):
+ """Two messages are equal if they have the same content in the
+ header, question, answer, and authority sections.
+
+ Returns a ``bool``.
+ """
+
+ if not isinstance(other, Message):
+ return False
+ if self.id != other.id:
+ return False
+ if self.flags != other.flags:
+ return False
+ for n in self.question:
+ if n not in other.question:
+ return False
+ for n in other.question:
+ if n not in self.question:
+ return False
+ for n in self.answer:
+ if n not in other.answer:
+ return False
+ for n in other.answer:
+ if n not in self.answer:
+ return False
+ for n in self.authority:
+ if n not in other.authority:
+ return False
+ for n in other.authority:
+ if n not in self.authority:
+ return False
+ return True
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+ def is_response(self, other):
+ """Is this message a response to *other*?
+
+ Returns a ``bool``.
+ """
+
+ if other.flags & dns.flags.QR == 0 or \
+ self.id != other.id or \
+ dns.opcode.from_flags(self.flags) != \
+ dns.opcode.from_flags(other.flags):
+ return False
+ if dns.rcode.from_flags(other.flags, other.ednsflags) != \
+ dns.rcode.NOERROR:
+ return True
+ if dns.opcode.is_update(self.flags):
+ return True
+ for n in self.question:
+ if n not in other.question:
+ return False
+ for n in other.question:
+ if n not in self.question:
+ return False
+ return True
+
+ def section_number(self, section):
+ """Return the "section number" of the specified section for use
+ in indexing. The question section is 0, the answer section is 1,
+ the authority section is 2, and the additional section is 3.
+
+ *section* is one of the section attributes of this message.
+
+ Raises ``ValueError`` if the section isn't known.
+
+ Returns an ``int``.
+ """
+
+ if section is self.question:
+ return QUESTION
+ elif section is self.answer:
+ return ANSWER
+ elif section is self.authority:
+ return AUTHORITY
+ elif section is self.additional:
+ return ADDITIONAL
+ else:
+ raise ValueError('unknown section')
+
+ def section_from_number(self, number):
+ """Return the section of this message with the specified section
+ number. The question section is 0, the answer section is 1,
+ the authority section is 2, and the additional section is 3.
+
+ *number*, an ``int``, the section number.
+
+ Raises ``ValueError`` if the section isn't known.
+
+ Returns a list.
+ """
+
+ if number == QUESTION:
+ return self.question
+ elif number == ANSWER:
+ return self.answer
+ elif number == AUTHORITY:
+ return self.authority
+ elif number == ADDITIONAL:
+ return self.additional
+ else:
+ raise ValueError('unknown section')
+
+ def find_rrset(self, section, name, rdclass, rdtype,
+ covers=dns.rdatatype.NONE, deleting=None, create=False,
+ force_unique=False):
+ """Find the RRset with the given attributes in the specified section.
+
+ *section*, an ``int`` section number, or one of the section
+ attributes of this message. This specifies the section of the
+ message to search. For example::
+
+ my_message.find_rrset(my_message.answer, name, rdclass, rdtype)
+ my_message.find_rrset(dns.message.ANSWER, name, rdclass, rdtype)
+
+ *name*, a ``dns.name.Name``, the name of the RRset.
+
+ *rdclass*, an ``int``, the class of the RRset.
+
+ *rdtype*, an ``int``, the type of the RRset.
+
+ *covers*, an ``int`` or ``None``, the covers value of the RRset.
+ The default is ``None``.
+
+ *deleting*, an ``int`` or ``None``, the deleting value of the RRset.
+ The default is ``None``.
+
+ *create*, a ``bool``. If ``True``, create the RRset if it is not found.
+ The created RRset is appended to *section*.
+
+ *force_unique*, a ``bool``. If ``True`` and *create* is also ``True``,
+ create a new RRset regardless of whether a matching RRset exists
+ already. The default is ``False``. This is useful when creating
+ DDNS Update messages, as order matters for them.
+
+ Raises ``KeyError`` if the RRset was not found and create was
+ ``False``.
+
+ Returns a ``dns.rrset.RRset object``.
+ """
+
+ if isinstance(section, int):
+ section_number = section
+ section = self.section_from_number(section_number)
+ else:
+ section_number = self.section_number(section)
+ key = (section_number, name, rdclass, rdtype, covers, deleting)
+ if not force_unique:
+ if self.index is not None:
+ rrset = self.index.get(key)
+ if rrset is not None:
+ return rrset
+ else:
+ for rrset in section:
+ if rrset.match(name, rdclass, rdtype, covers, deleting):
+ return rrset
+ if not create:
+ raise KeyError
+ rrset = dns.rrset.RRset(name, rdclass, rdtype, covers, deleting)
+ section.append(rrset)
+ if self.index is not None:
+ self.index[key] = rrset
+ return rrset
+
+ def get_rrset(self, section, name, rdclass, rdtype,
+ covers=dns.rdatatype.NONE, deleting=None, create=False,
+ force_unique=False):
+ """Get the RRset with the given attributes in the specified section.
+
+ If the RRset is not found, None is returned.
+
+ *section*, an ``int`` section number, or one of the section
+ attributes of this message. This specifies the section of the
+ message to search. For example::
+
+ my_message.get_rrset(my_message.answer, name, rdclass, rdtype)
+ my_message.get_rrset(dns.message.ANSWER, name, rdclass, rdtype)
+
+ *name*, a ``dns.name.Name``, the name of the RRset.
+
+ *rdclass*, an ``int``, the class of the RRset.
+
+ *rdtype*, an ``int``, the type of the RRset.
+
+ *covers*, an ``int`` or ``None``, the covers value of the RRset.
+ The default is ``None``.
+
+ *deleting*, an ``int`` or ``None``, the deleting value of the RRset.
+ The default is ``None``.
+
+ *create*, a ``bool``. If ``True``, create the RRset if it is not found.
+ The created RRset is appended to *section*.
+
+ *force_unique*, a ``bool``. If ``True`` and *create* is also ``True``,
+ create a new RRset regardless of whether a matching RRset exists
+ already. The default is ``False``. This is useful when creating
+ DDNS Update messages, as order matters for them.
+
+ Returns a ``dns.rrset.RRset object`` or ``None``.
+ """
+
+ try:
+ rrset = self.find_rrset(section, name, rdclass, rdtype, covers,
+ deleting, create, force_unique)
+ except KeyError:
+ rrset = None
+ return rrset
+
+ def to_wire(self, origin=None, max_size=0, **kw):
+ """Return a string containing the message in DNS compressed wire
+ format.
+
+ Additional keyword arguments are passed to the RRset ``to_wire()``
+ method.
+
+ *origin*, a ``dns.name.Name`` or ``None``, the origin to be appended
+ to any relative names.
+
+ *max_size*, an ``int``, the maximum size of the wire format
+ output; default is 0, which means "the message's request
+ payload, if nonzero, or 65535".
+
+ Raises ``dns.exception.TooBig`` if *max_size* was exceeded.
+
+ Returns a ``binary``.
+ """
+
+ if max_size == 0:
+ if self.request_payload != 0:
+ max_size = self.request_payload
+ else:
+ max_size = 65535
+ if max_size < 512:
+ max_size = 512
+ elif max_size > 65535:
+ max_size = 65535
+ r = dns.renderer.Renderer(self.id, self.flags, max_size, origin)
+ for rrset in self.question:
+ r.add_question(rrset.name, rrset.rdtype, rrset.rdclass)
+ for rrset in self.answer:
+ r.add_rrset(dns.renderer.ANSWER, rrset, **kw)
+ for rrset in self.authority:
+ r.add_rrset(dns.renderer.AUTHORITY, rrset, **kw)
+ if self.edns >= 0:
+ r.add_edns(self.edns, self.ednsflags, self.payload, self.options)
+ for rrset in self.additional:
+ r.add_rrset(dns.renderer.ADDITIONAL, rrset, **kw)
+ r.write_header()
+ if self.keyname is not None:
+ r.add_tsig(self.keyname, self.keyring[self.keyname],
+ self.fudge, self.original_id, self.tsig_error,
+ self.other_data, self.request_mac,
+ self.keyalgorithm)
+ self.mac = r.mac
+ return r.get_wire()
+
+ def use_tsig(self, keyring, keyname=None, fudge=300,
+ original_id=None, tsig_error=0, other_data=b'',
+ algorithm=dns.tsig.default_algorithm):
+ """When sending, a TSIG signature using the specified keyring
+ and keyname should be added.
+
+ See the documentation of the Message class for a complete
+ description of the keyring dictionary.
+
+ *keyring*, a ``dict``, the TSIG keyring to use. If a
+ *keyring* is specified but a *keyname* is not, then the key
+ used will be the first key in the *keyring*. Note that the
+ order of keys in a dictionary is not defined, so applications
+ should supply a keyname when a keyring is used, unless they
+ know the keyring contains only one key.
+
+ *keyname*, a ``dns.name.Name`` or ``None``, the name of the TSIG key
+ to use; defaults to ``None``. The key must be defined in the keyring.
+
+ *fudge*, an ``int``, the TSIG time fudge.
+
+ *original_id*, an ``int``, the TSIG original id. If ``None``,
+ the message's id is used.
+
+ *tsig_error*, an ``int``, the TSIG error code.
+
+ *other_data*, a ``binary``, the TSIG other data.
+
+ *algorithm*, a ``dns.name.Name``, the TSIG algorithm to use.
+ """
+
+ self.keyring = keyring
+ if keyname is None:
+ self.keyname = list(self.keyring.keys())[0]
+ else:
+ if isinstance(keyname, string_types):
+ keyname = dns.name.from_text(keyname)
+ self.keyname = keyname
+ self.keyalgorithm = algorithm
+ self.fudge = fudge
+ if original_id is None:
+ self.original_id = self.id
+ else:
+ self.original_id = original_id
+ self.tsig_error = tsig_error
+ self.other_data = other_data
+
+ def use_edns(self, edns=0, ednsflags=0, payload=1280, request_payload=None,
+ options=None):
+ """Configure EDNS behavior.
+
+ *edns*, an ``int``, is the EDNS level to use. Specifying
+ ``None``, ``False``, or ``-1`` means "do not use EDNS", and in this case
+ the other parameters are ignored. Specifying ``True`` is
+ equivalent to specifying 0, i.e. "use EDNS0".
+
+ *ednsflags*, an ``int``, the EDNS flag values.
+
+ *payload*, an ``int``, is the EDNS sender's payload field, which is the
+ maximum size of UDP datagram the sender can handle. I.e. how big
+ a response to this message can be.
+
+ *request_payload*, an ``int``, is the EDNS payload size to use when
+ sending this message. If not specified, defaults to the value of
+ *payload*.
+
+ *options*, a list of ``dns.edns.Option`` objects or ``None``, the EDNS
+ options.
+ """
+
+ if edns is None or edns is False:
+ edns = -1
+ if edns is True:
+ edns = 0
+ if request_payload is None:
+ request_payload = payload
+ if edns < 0:
+ ednsflags = 0
+ payload = 0
+ request_payload = 0
+ options = []
+ else:
+ # make sure the EDNS version in ednsflags agrees with edns
+ ednsflags &= long(0xFF00FFFF)
+ ednsflags |= (edns << 16)
+ if options is None:
+ options = []
+ self.edns = edns
+ self.ednsflags = ednsflags
+ self.payload = payload
+ self.options = options
+ self.request_payload = request_payload
+
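`use_edns` keeps the EDNS version byte inside *ednsflags* (bits 16-23) in sync with the *edns* argument via the masking above. The bit arithmetic can be shown on its own — the helper name is illustrative, not part of the API:

```python
def merge_edns_version(ednsflags, edns):
    """Force the EDNS version field of ednsflags to agree with edns,
    mirroring the masking done in Message.use_edns()."""
    # Clear bits 16-23 (the version field), then write `edns` there,
    # leaving the DO bit and extended-rcode bits untouched.
    return (ednsflags & 0xFF00FFFF) | (edns << 16)
```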
+ def want_dnssec(self, wanted=True):
+ """Enable or disable 'DNSSEC desired' flag in requests.
+
+ *wanted*, a ``bool``. If ``True``, then DNSSEC data is
+ desired in the response, EDNS is enabled if required, and then
+ the DO bit is set. If ``False``, the DO bit is cleared if
+ EDNS is enabled.
+ """
+
+ if wanted:
+ if self.edns < 0:
+ self.use_edns()
+ self.ednsflags |= dns.flags.DO
+ elif self.edns >= 0:
+ self.ednsflags &= ~dns.flags.DO
+
+ def rcode(self):
+ """Return the rcode.
+
+ Returns an ``int``.
+ """
+ return dns.rcode.from_flags(self.flags, self.ednsflags)
+
+ def set_rcode(self, rcode):
+ """Set the rcode.
+
+ *rcode*, an ``int``, is the rcode to set.
+ """
+ (value, evalue) = dns.rcode.to_flags(rcode)
+ self.flags &= 0xFFF0
+ self.flags |= value
+ self.ednsflags &= long(0x00FFFFFF)
+ self.ednsflags |= evalue
+ if self.ednsflags != 0 and self.edns < 0:
+ self.edns = 0
+
+ def opcode(self):
+ """Return the opcode.
+
+ Returns an ``int``.
+ """
+ return dns.opcode.from_flags(self.flags)
+
+ def set_opcode(self, opcode):
+ """Set the opcode.
+
+ *opcode*, an ``int``, is the opcode to set.
+ """
+ self.flags &= 0x87FF
+ self.flags |= dns.opcode.to_flags(opcode)
+
+
+class _WireReader(object):
+
+ """Wire format reader.
+
+ wire: a binary, is the wire-format message.
+ message: The message object being built
+ current: When building a message object from wire format, this
+ variable contains the offset from the beginning of wire of the next octet
+ to be read.
+ updating: Is the message a dynamic update?
+ one_rr_per_rrset: Put each RR into its own RRset?
+ ignore_trailing: Ignore trailing junk at end of request?
+ zone_rdclass: The class of the zone in messages which are
+ DNS dynamic updates.
+ """
+
+ def __init__(self, wire, message, question_only=False,
+ one_rr_per_rrset=False, ignore_trailing=False):
+ self.wire = dns.wiredata.maybe_wrap(wire)
+ self.message = message
+ self.current = 0
+ self.updating = False
+ self.zone_rdclass = dns.rdataclass.IN
+ self.question_only = question_only
+ self.one_rr_per_rrset = one_rr_per_rrset
+ self.ignore_trailing = ignore_trailing
+
+ def _get_question(self, qcount):
+ """Read the next *qcount* records from the wire data and add them to
+ the question section.
+ """
+
+ if self.updating and qcount > 1:
+ raise dns.exception.FormError
+
+ for i in xrange(0, qcount):
+ (qname, used) = dns.name.from_wire(self.wire, self.current)
+ if self.message.origin is not None:
+ qname = qname.relativize(self.message.origin)
+ self.current = self.current + used
+ (rdtype, rdclass) = \
+ struct.unpack('!HH',
+ self.wire[self.current:self.current + 4])
+ self.current = self.current + 4
+ self.message.find_rrset(self.message.question, qname,
+ rdclass, rdtype, create=True,
+ force_unique=True)
+ if self.updating:
+ self.zone_rdclass = rdclass
+
+ def _get_section(self, section, count):
+ """Read the next *count* records from the wire data and add them to
+ the specified section.
+
+ section: the section of the message to which to add records
+ count: the number of records to read
+ """
+
+ if self.updating or self.one_rr_per_rrset:
+ force_unique = True
+ else:
+ force_unique = False
+ seen_opt = False
+ for i in xrange(0, count):
+ rr_start = self.current
+ (name, used) = dns.name.from_wire(self.wire, self.current)
+ absolute_name = name
+ if self.message.origin is not None:
+ name = name.relativize(self.message.origin)
+ self.current = self.current + used
+ (rdtype, rdclass, ttl, rdlen) = \
+ struct.unpack('!HHIH',
+ self.wire[self.current:self.current + 10])
+ self.current = self.current + 10
+ if rdtype == dns.rdatatype.OPT:
+ if section is not self.message.additional or seen_opt:
+ raise BadEDNS
+ self.message.payload = rdclass
+ self.message.ednsflags = ttl
+ self.message.edns = (ttl & 0xff0000) >> 16
+ self.message.options = []
+ current = self.current
+ optslen = rdlen
+ while optslen > 0:
+ (otype, olen) = \
+ struct.unpack('!HH',
+ self.wire[current:current + 4])
+ current = current + 4
+ opt = dns.edns.option_from_wire(
+ otype, self.wire, current, olen)
+ self.message.options.append(opt)
+ current = current + olen
+ optslen = optslen - 4 - olen
+ seen_opt = True
+ elif rdtype == dns.rdatatype.TSIG:
+ if not (section is self.message.additional and
+ i == (count - 1)):
+ raise BadTSIG
+ if self.message.keyring is None:
+ raise UnknownTSIGKey('got signed message without keyring')
+ secret = self.message.keyring.get(absolute_name)
+ if secret is None:
+ raise UnknownTSIGKey("key '%s' unknown" % name)
+ self.message.keyname = absolute_name
+ (self.message.keyalgorithm, self.message.mac) = \
+ dns.tsig.get_algorithm_and_mac(self.wire, self.current,
+ rdlen)
+ self.message.tsig_ctx = \
+ dns.tsig.validate(self.wire,
+ absolute_name,
+ secret,
+ int(time.time()),
+ self.message.request_mac,
+ rr_start,
+ self.current,
+ rdlen,
+ self.message.tsig_ctx,
+ self.message.multi,
+ self.message.first)
+ self.message.had_tsig = True
+ else:
+ if ttl < 0:
+ ttl = 0
+ if self.updating and \
+ (rdclass == dns.rdataclass.ANY or
+ rdclass == dns.rdataclass.NONE):
+ deleting = rdclass
+ rdclass = self.zone_rdclass
+ else:
+ deleting = None
+ if deleting == dns.rdataclass.ANY or \
+ (deleting == dns.rdataclass.NONE and
+ section is self.message.answer):
+ covers = dns.rdatatype.NONE
+ rd = None
+ else:
+ rd = dns.rdata.from_wire(rdclass, rdtype, self.wire,
+ self.current, rdlen,
+ self.message.origin)
+ covers = rd.covers()
+ if self.message.xfr and rdtype == dns.rdatatype.SOA:
+ force_unique = True
+ rrset = self.message.find_rrset(section, name,
+ rdclass, rdtype, covers,
+ deleting, True, force_unique)
+ if rd is not None:
+ rrset.add(rd, ttl)
+ self.current = self.current + rdlen
+
+ def read(self):
+ """Read a wire format DNS message and build a dns.message.Message
+ object."""
+
+ l = len(self.wire)
+ if l < 12:
+ raise ShortHeader
+ (self.message.id, self.message.flags, qcount, ancount,
+ aucount, adcount) = struct.unpack('!HHHHHH', self.wire[:12])
+ self.current = 12
+ if dns.opcode.is_update(self.message.flags):
+ self.updating = True
+ self._get_question(qcount)
+ if self.question_only:
+ return
+ self._get_section(self.message.answer, ancount)
+ self._get_section(self.message.authority, aucount)
+ self._get_section(self.message.additional, adcount)
+ if not self.ignore_trailing and self.current != l:
+ raise TrailingJunk
+ if self.message.multi and self.message.tsig_ctx and \
+ not self.message.had_tsig:
+ self.message.tsig_ctx.update(self.wire)
+
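`_WireReader.read` above unpacks the fixed 12-octet DNS header with the `'!HHHHHH'` layout: id, flags, then the four section counts as big-endian 16-bit fields. A minimal sketch of the inverse operation, useful for producing test input — the `pack_header` helper is hypothetical, not part of the module:

```python
import struct

def pack_header(msg_id, flags, qcount=0, ancount=0, aucount=0, adcount=0):
    # id, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT, all big-endian
    # 16-bit, mirroring struct.unpack('!HHHHHH', wire[:12]) in read().
    return struct.pack('!HHHHHH', msg_id, flags,
                       qcount, ancount, aucount, adcount)
```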
+
+def from_wire(wire, keyring=None, request_mac=b'', xfr=False, origin=None,
+ tsig_ctx=None, multi=False, first=True,
+ question_only=False, one_rr_per_rrset=False,
+ ignore_trailing=False):
+ """Convert a DNS wire format message into a message
+ object.
+
+ *keyring*, a ``dict``, the keyring to use if the message is signed.
+
+ *request_mac*, a ``binary``. If the message is a response to a
+ TSIG-signed request, *request_mac* should be set to the MAC of
+ that request.
+
+ *xfr*, a ``bool``, should be set to ``True`` if this message is part of
+ a zone transfer.
+
+ *origin*, a ``dns.name.Name`` or ``None``. If the message is part
+ of a zone transfer, *origin* should be the origin name of the
+ zone.
+
+ *tsig_ctx*, a ``hmac.HMAC`` object, the ongoing TSIG context, used
+ when validating zone transfers.
+
+ *multi*, a ``bool``, should be set to ``True`` if this message is
+ part of a multiple message sequence.
+
+ *first*, a ``bool``, should be set to ``True`` if this message is
+ stand-alone, or the first message in a multi-message sequence.
+
+ *question_only*, a ``bool``. If ``True``, read only up to
+ the end of the question section.
+
+ *one_rr_per_rrset*, a ``bool``. If ``True``, put each RR into its
+ own RRset.
+
+ *ignore_trailing*, a ``bool``. If ``True``, ignore trailing
+ junk at end of the message.
+
+ Raises ``dns.message.ShortHeader`` if the message is less than 12 octets
+ long.
+
+ Raises ``dns.message.TrailingJunk`` if there were octets in the message
+ past the end of the proper DNS message, and *ignore_trailing* is ``False``.
+
+ Raises ``dns.message.BadEDNS`` if an OPT record was in the
+ wrong section, or occurred more than once.
+
+ Raises ``dns.message.BadTSIG`` if a TSIG record was not the last
+ record of the additional data section.
+
+ Returns a ``dns.message.Message``.
+ """
+
+ m = Message(id=0)
+ m.keyring = keyring
+ m.request_mac = request_mac
+ m.xfr = xfr
+ m.origin = origin
+ m.tsig_ctx = tsig_ctx
+ m.multi = multi
+ m.first = first
+
+ reader = _WireReader(wire, m, question_only, one_rr_per_rrset,
+ ignore_trailing)
+ reader.read()
+
+ return m
+
+
+class _TextReader(object):
+
+ """Text format reader.
+
+ tok: the tokenizer.
+ message: The message object being built.
+ updating: Is the message a dynamic update?
+ zone_rdclass: The class of the zone in messages which are
+ DNS dynamic updates.
+ last_name: The most recently read name when building a message object.
+ """
+
+ def __init__(self, text, message):
+ self.message = message
+ self.tok = dns.tokenizer.Tokenizer(text)
+ self.last_name = None
+ self.zone_rdclass = dns.rdataclass.IN
+ self.updating = False
+
+ def _header_line(self, section):
+ """Process one line from the text format header section."""
+
+ token = self.tok.get()
+ what = token.value
+ if what == 'id':
+ self.message.id = self.tok.get_int()
+ elif what == 'flags':
+ while True:
+ token = self.tok.get()
+ if not token.is_identifier():
+ self.tok.unget(token)
+ break
+ self.message.flags = self.message.flags | \
+ dns.flags.from_text(token.value)
+ if dns.opcode.is_update(self.message.flags):
+ self.updating = True
+ elif what == 'edns':
+ self.message.edns = self.tok.get_int()
+ self.message.ednsflags = self.message.ednsflags | \
+ (self.message.edns << 16)
+ elif what == 'eflags':
+ if self.message.edns < 0:
+ self.message.edns = 0
+ while True:
+ token = self.tok.get()
+ if not token.is_identifier():
+ self.tok.unget(token)
+ break
+ self.message.ednsflags = self.message.ednsflags | \
+ dns.flags.edns_from_text(token.value)
+ elif what == 'payload':
+ self.message.payload = self.tok.get_int()
+ if self.message.edns < 0:
+ self.message.edns = 0
+ elif what == 'opcode':
+ text = self.tok.get_string()
+ self.message.flags = self.message.flags | \
+ dns.opcode.to_flags(dns.opcode.from_text(text))
+ elif what == 'rcode':
+ text = self.tok.get_string()
+ self.message.set_rcode(dns.rcode.from_text(text))
+ else:
+ raise UnknownHeaderField
+ self.tok.get_eol()
+
+ def _question_line(self, section):
+ """Process one line from the text format question section."""
+
+ token = self.tok.get(want_leading=True)
+ if not token.is_whitespace():
+ self.last_name = dns.name.from_text(token.value, None)
+ name = self.last_name
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ # Class
+ try:
+ rdclass = dns.rdataclass.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.exception.SyntaxError:
+ raise dns.exception.SyntaxError
+ except Exception:
+ rdclass = dns.rdataclass.IN
+ # Type
+ rdtype = dns.rdatatype.from_text(token.value)
+ self.message.find_rrset(self.message.question, name,
+ rdclass, rdtype, create=True,
+ force_unique=True)
+ if self.updating:
+ self.zone_rdclass = rdclass
+ self.tok.get_eol()
+
+ def _rr_line(self, section):
+ """Process one line from the text format answer, authority, or
+ additional data sections.
+ """
+
+ deleting = None
+ # Name
+ token = self.tok.get(want_leading=True)
+ if not token.is_whitespace():
+ self.last_name = dns.name.from_text(token.value, None)
+ name = self.last_name
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ # TTL
+ try:
+ ttl = int(token.value, 0)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.exception.SyntaxError:
+ raise dns.exception.SyntaxError
+ except Exception:
+ ttl = 0
+ # Class
+ try:
+ rdclass = dns.rdataclass.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ if rdclass == dns.rdataclass.ANY or rdclass == dns.rdataclass.NONE:
+ deleting = rdclass
+ rdclass = self.zone_rdclass
+ except dns.exception.SyntaxError:
+ raise dns.exception.SyntaxError
+ except Exception:
+ rdclass = dns.rdataclass.IN
+ # Type
+ rdtype = dns.rdatatype.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_eol_or_eof():
+ self.tok.unget(token)
+ rd = dns.rdata.from_text(rdclass, rdtype, self.tok, None)
+ covers = rd.covers()
+ else:
+ rd = None
+ covers = dns.rdatatype.NONE
+ rrset = self.message.find_rrset(section, name,
+ rdclass, rdtype, covers,
+ deleting, True, self.updating)
+ if rd is not None:
+ rrset.add(rd, ttl)
+
+ def read(self):
+ """Read a text format DNS message and build a dns.message.Message
+ object."""
+
+ line_method = self._header_line
+ section = None
+ while 1:
+ token = self.tok.get(True, True)
+ if token.is_eol_or_eof():
+ break
+ if token.is_comment():
+ u = token.value.upper()
+ if u == 'HEADER':
+ line_method = self._header_line
+ elif u == 'QUESTION' or u == 'ZONE':
+ line_method = self._question_line
+ section = self.message.question
+ elif u == 'ANSWER' or u == 'PREREQ':
+ line_method = self._rr_line
+ section = self.message.answer
+ elif u == 'AUTHORITY' or u == 'UPDATE':
+ line_method = self._rr_line
+ section = self.message.authority
+ elif u == 'ADDITIONAL':
+ line_method = self._rr_line
+ section = self.message.additional
+ self.tok.get_eol()
+ continue
+ self.tok.unget(token)
+ line_method(section)
+
+
+def from_text(text):
+ """Convert the text format message into a message object.
+
+ *text*, a ``text``, the text format message.
+
+ Raises ``dns.message.UnknownHeaderField`` if a header is unknown.
+
+ Raises ``dns.exception.SyntaxError`` if the text is badly formed.
+
+    Returns a ``dns.message.Message`` object.
+ """
+
+ # 'text' can also be a file, but we don't publish that fact
+ # since it's an implementation detail. The official file
+ # interface is from_file().
+
+ m = Message()
+
+ reader = _TextReader(text, m)
+ reader.read()
+
+ return m
+
+
+def from_file(f):
+ """Read the next text format message from the specified file.
+
+ *f*, a ``file`` or ``text``. If *f* is text, it is treated as the
+ pathname of a file to open.
+
+ Raises ``dns.message.UnknownHeaderField`` if a header is unknown.
+
+ Raises ``dns.exception.SyntaxError`` if the text is badly formed.
+
+    Returns a ``dns.message.Message`` object.
+ """
+
+ str_type = string_types
+ opts = 'rU'
+
+ if isinstance(f, str_type):
+ f = open(f, opts)
+ want_close = True
+ else:
+ want_close = False
+
+ try:
+ m = from_text(f)
+ finally:
+ if want_close:
+ f.close()
+ return m
+
+
+def make_query(qname, rdtype, rdclass=dns.rdataclass.IN, use_edns=None,
+ want_dnssec=False, ednsflags=None, payload=None,
+ request_payload=None, options=None):
+ """Make a query message.
+
+ The query name, type, and class may all be specified either
+ as objects of the appropriate type, or as strings.
+
+ The query will have a randomly chosen query id, and its DNS flags
+ will be set to dns.flags.RD.
+
+    *qname*, a ``dns.name.Name`` or ``text``, the query name.
+
+ *rdtype*, an ``int`` or ``text``, the desired rdata type.
+
+ *rdclass*, an ``int`` or ``text``, the desired rdata class; the default
+ is class IN.
+
+ *use_edns*, an ``int``, ``bool`` or ``None``. The EDNS level to use; the
+ default is None (no EDNS).
+ See the description of dns.message.Message.use_edns() for the possible
+ values for use_edns and their meanings.
+
+ *want_dnssec*, a ``bool``. If ``True``, DNSSEC data is desired.
+
+ *ednsflags*, an ``int``, the EDNS flag values.
+
+ *payload*, an ``int``, is the EDNS sender's payload field, which is the
+ maximum size of UDP datagram the sender can handle. I.e. how big
+ a response to this message can be.
+
+ *request_payload*, an ``int``, is the EDNS payload size to use when
+ sending this message. If not specified, defaults to the value of
+ *payload*.
+
+ *options*, a list of ``dns.edns.Option`` objects or ``None``, the EDNS
+ options.
+
+ Returns a ``dns.message.Message``
+ """
+
+ if isinstance(qname, string_types):
+ qname = dns.name.from_text(qname)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(rdclass, string_types):
+ rdclass = dns.rdataclass.from_text(rdclass)
+ m = Message()
+ m.flags |= dns.flags.RD
+ m.find_rrset(m.question, qname, rdclass, rdtype, create=True,
+ force_unique=True)
+ # only pass keywords on to use_edns if they have been set to a
+ # non-None value. Setting a field will turn EDNS on if it hasn't
+ # been configured.
+    kwargs = {}
+    if ednsflags is not None:
+        kwargs['ednsflags'] = ednsflags
+        if use_edns is None:
+            use_edns = 0
+    if payload is not None:
+        kwargs['payload'] = payload
+        if use_edns is None:
+            use_edns = 0
+    if request_payload is not None:
+        kwargs['request_payload'] = request_payload
+        if use_edns is None:
+            use_edns = 0
+    if options is not None:
+        kwargs['options'] = options
+        if use_edns is None:
+            use_edns = 0
+    kwargs['edns'] = use_edns
+ m.use_edns(**kwargs)
+ m.want_dnssec(want_dnssec)
+ return m
+
+
+def make_response(query, recursion_available=False, our_payload=8192,
+ fudge=300):
+ """Make a message which is a response for the specified query.
+ The message returned is really a response skeleton; it has all
+ of the infrastructure required of a response, but none of the
+ content.
+
+ The response's question section is a shallow copy of the query's
+ question section, so the query's question RRsets should not be
+ changed.
+
+ *query*, a ``dns.message.Message``, the query to respond to.
+
+ *recursion_available*, a ``bool``, should RA be set in the response?
+
+ *our_payload*, an ``int``, the payload size to advertise in EDNS
+ responses.
+
+ *fudge*, an ``int``, the TSIG time fudge.
+
+ Returns a ``dns.message.Message`` object.
+ """
+
+ if query.flags & dns.flags.QR:
+ raise dns.exception.FormError('specified query message is not a query')
+ response = dns.message.Message(query.id)
+ response.flags = dns.flags.QR | (query.flags & dns.flags.RD)
+ if recursion_available:
+ response.flags |= dns.flags.RA
+ response.set_opcode(query.opcode())
+ response.question = list(query.question)
+ if query.edns >= 0:
+ response.use_edns(0, 0, our_payload, query.payload)
+ if query.had_tsig:
+ response.use_tsig(query.keyring, query.keyname, fudge, None, 0, b'',
+ query.keyalgorithm)
+ response.request_mac = query.mac
+ return response
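For reference, the flag arithmetic used by `make_response` above can be sketched with plain integers. The constants below stand in for `dns.flags.QR`, `dns.flags.RD`, and `dns.flags.RA` (bit values per RFC 1035, section 4.1.1); this is an illustrative sketch, not part of the vendored module:

```python
# DNS header flag bits (RFC 1035, section 4.1.1)
QR = 0x8000  # response bit
RD = 0x0100  # recursion desired
RA = 0x0080  # recursion available

def response_flags(query_flags, recursion_available=False):
    """Build response flags: set QR, copy RD from the query, optionally set RA."""
    flags = QR | (query_flags & RD)
    if recursion_available:
        flags |= RA
    return flags
```

Only the RD bit is carried over from the query; everything else starts cleared.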
diff --git a/openpype/vendor/python/python_2/dns/name.py b/openpype/vendor/python/python_2/dns/name.py
new file mode 100644
index 0000000000..0bcfd83432
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/name.py
@@ -0,0 +1,994 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Names.
+"""
+
+from io import BytesIO
+import struct
+import sys
+import copy
+import encodings.idna
+try:
+ import idna
+ have_idna_2008 = True
+except ImportError:
+ have_idna_2008 = False
+
+import dns.exception
+import dns.wiredata
+
+from ._compat import long, binary_type, text_type, unichr, maybe_decode
+
+try:
+ maxint = sys.maxint # pylint: disable=sys-max-int
+except AttributeError:
+ maxint = (1 << (8 * struct.calcsize("P"))) // 2 - 1
+
+
+# fullcompare() result values
+
+#: The compared names have no relationship to each other.
+NAMERELN_NONE = 0
+#: the first name is a superdomain of the second.
+NAMERELN_SUPERDOMAIN = 1
+#: The first name is a subdomain of the second.
+NAMERELN_SUBDOMAIN = 2
+#: The compared names are equal.
+NAMERELN_EQUAL = 3
+#: The compared names have a common ancestor.
+NAMERELN_COMMONANCESTOR = 4
+
+
+class EmptyLabel(dns.exception.SyntaxError):
+ """A DNS label is empty."""
+
+
+class BadEscape(dns.exception.SyntaxError):
+ """An escaped code in a text format of DNS name is invalid."""
+
+
+class BadPointer(dns.exception.FormError):
+ """A DNS compression pointer points forward instead of backward."""
+
+
+class BadLabelType(dns.exception.FormError):
+ """The label type in DNS name wire format is unknown."""
+
+
+class NeedAbsoluteNameOrOrigin(dns.exception.DNSException):
+ """An attempt was made to convert a non-absolute name to
+ wire when there was also a non-absolute (or missing) origin."""
+
+
+class NameTooLong(dns.exception.FormError):
+ """A DNS name is > 255 octets long."""
+
+
+class LabelTooLong(dns.exception.SyntaxError):
+ """A DNS label is > 63 octets long."""
+
+
+class AbsoluteConcatenation(dns.exception.DNSException):
+ """An attempt was made to append anything other than the
+ empty name to an absolute DNS name."""
+
+
+class NoParent(dns.exception.DNSException):
+ """An attempt was made to get the parent of the root name
+ or the empty name."""
+
+
+class NoIDNA2008(dns.exception.DNSException):
+ """IDNA 2008 processing was requested but the idna module is not
+ available."""
+
+
+class IDNAException(dns.exception.DNSException):
+ """IDNA processing raised an exception."""
+
+ supp_kwargs = {'idna_exception'}
+ fmt = "IDNA processing exception: {idna_exception}"
+
+
+class IDNACodec(object):
+ """Abstract base class for IDNA encoder/decoders."""
+
+ def __init__(self):
+ pass
+
+ def encode(self, label):
+ raise NotImplementedError
+
+ def decode(self, label):
+        # We do not apply any IDNA policy on decode; we just decode punycode.
+ downcased = label.lower()
+ if downcased.startswith(b'xn--'):
+ try:
+ label = downcased[4:].decode('punycode')
+ except Exception as e:
+ raise IDNAException(idna_exception=e)
+ else:
+ label = maybe_decode(label)
+ return _escapify(label, True)
+
+
+class IDNA2003Codec(IDNACodec):
+ """IDNA 2003 encoder/decoder."""
+
+ def __init__(self, strict_decode=False):
+ """Initialize the IDNA 2003 encoder/decoder.
+
+ *strict_decode* is a ``bool``. If `True`, then IDNA2003 checking
+ is done when decoding. This can cause failures if the name
+ was encoded with IDNA2008. The default is `False`.
+ """
+
+ super(IDNA2003Codec, self).__init__()
+ self.strict_decode = strict_decode
+
+ def encode(self, label):
+ """Encode *label*."""
+
+ if label == '':
+ return b''
+ try:
+ return encodings.idna.ToASCII(label)
+ except UnicodeError:
+ raise LabelTooLong
+
+ def decode(self, label):
+ """Decode *label*."""
+ if not self.strict_decode:
+ return super(IDNA2003Codec, self).decode(label)
+ if label == b'':
+ return u''
+ try:
+ return _escapify(encodings.idna.ToUnicode(label), True)
+ except Exception as e:
+ raise IDNAException(idna_exception=e)
+
+
+class IDNA2008Codec(IDNACodec):
+ """IDNA 2008 encoder/decoder.
+
+ *uts_46* is a ``bool``. If True, apply Unicode IDNA
+ compatibility processing as described in Unicode Technical
+ Standard #46 (http://unicode.org/reports/tr46/).
+ If False, do not apply the mapping. The default is False.
+
+ *transitional* is a ``bool``: If True, use the
+ "transitional" mode described in Unicode Technical Standard
+ #46. The default is False.
+
+ *allow_pure_ascii* is a ``bool``. If True, then a label which
+ consists of only ASCII characters is allowed. This is less
+ strict than regular IDNA 2008, but is also necessary for mixed
+ names, e.g. a name with starting with "_sip._tcp." and ending
+ in an IDN suffix which would otherwise be disallowed. The
+ default is False.
+
+ *strict_decode* is a ``bool``: If True, then IDNA2008 checking
+ is done when decoding. This can cause failures if the name
+ was encoded with IDNA2003. The default is False.
+ """
+
+ def __init__(self, uts_46=False, transitional=False,
+ allow_pure_ascii=False, strict_decode=False):
+ """Initialize the IDNA 2008 encoder/decoder."""
+ super(IDNA2008Codec, self).__init__()
+ self.uts_46 = uts_46
+ self.transitional = transitional
+ self.allow_pure_ascii = allow_pure_ascii
+ self.strict_decode = strict_decode
+
+ def is_all_ascii(self, label):
+ for c in label:
+ if ord(c) > 0x7f:
+ return False
+ return True
+
+ def encode(self, label):
+ if label == '':
+ return b''
+ if self.allow_pure_ascii and self.is_all_ascii(label):
+ return label.encode('ascii')
+ if not have_idna_2008:
+ raise NoIDNA2008
+ try:
+ if self.uts_46:
+ label = idna.uts46_remap(label, False, self.transitional)
+ return idna.alabel(label)
+ except idna.IDNAError as e:
+ raise IDNAException(idna_exception=e)
+
+ def decode(self, label):
+ if not self.strict_decode:
+ return super(IDNA2008Codec, self).decode(label)
+ if label == b'':
+ return u''
+ if not have_idna_2008:
+ raise NoIDNA2008
+ try:
+ if self.uts_46:
+ label = idna.uts46_remap(label, False, False)
+ return _escapify(idna.ulabel(label), True)
+ except idna.IDNAError as e:
+ raise IDNAException(idna_exception=e)
+
+_escaped = bytearray(b'"().;\\@$')
+
+IDNA_2003_Practical = IDNA2003Codec(False)
+IDNA_2003_Strict = IDNA2003Codec(True)
+IDNA_2003 = IDNA_2003_Practical
+IDNA_2008_Practical = IDNA2008Codec(True, False, True, False)
+IDNA_2008_UTS_46 = IDNA2008Codec(True, False, False, False)
+IDNA_2008_Strict = IDNA2008Codec(False, False, False, True)
+IDNA_2008_Transitional = IDNA2008Codec(True, True, False, False)
+IDNA_2008 = IDNA_2008_Practical
+
+def _escapify(label, unicode_mode=False):
+ """Escape the characters in label which need it.
+ @param unicode_mode: escapify only special and whitespace (<= 0x20)
+ characters
+ @returns: the escaped string
+ @rtype: string"""
+ if not unicode_mode:
+ text = ''
+ if isinstance(label, text_type):
+ label = label.encode()
+ for c in bytearray(label):
+ if c in _escaped:
+ text += '\\' + chr(c)
+ elif c > 0x20 and c < 0x7F:
+ text += chr(c)
+ else:
+ text += '\\%03d' % c
+ return text.encode()
+
+ text = u''
+ if isinstance(label, binary_type):
+ label = label.decode()
+ for c in label:
+ if c > u'\x20' and c < u'\x7f':
+ text += c
+ else:
+ if c >= u'\x7f':
+ text += c
+ else:
+ text += u'\\%03d' % ord(c)
+ return text
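The byte-string branch of `_escapify` above can be exercised in isolation. The following standalone sketch mirrors its logic (returning text rather than bytes, for readability; it is illustrative, not part of the module):

```python
_ESCAPED = bytearray(b'"().;\\@$')

def escapify(label):
    """Escape special and non-printable bytes in a DNS label."""
    text = ''
    for c in bytearray(label):
        if c in _ESCAPED:
            text += '\\' + chr(c)   # backslash-escape special characters
        elif 0x20 < c < 0x7F:
            text += chr(c)          # printable ASCII passes through
        else:
            text += '\\%03d' % c    # everything else becomes \DDD (decimal)
    return text
```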
+
+def _validate_labels(labels):
+ """Check for empty labels in the middle of a label sequence,
+ labels that are too long, and for too many labels.
+
+ Raises ``dns.name.NameTooLong`` if the name as a whole is too long.
+
+    Raises ``dns.name.LabelTooLong`` if a label is too long.
+
+    Raises ``dns.name.EmptyLabel`` if a label is empty (i.e. the root
+    label) and appears in a position other than the end of the label
+    sequence.
+
+ """
+
+ l = len(labels)
+ total = 0
+ i = -1
+ j = 0
+ for label in labels:
+ ll = len(label)
+ total += ll + 1
+ if ll > 63:
+ raise LabelTooLong
+ if i < 0 and label == b'':
+ i = j
+ j += 1
+ if total > 255:
+ raise NameTooLong
+ if i >= 0 and i != l - 1:
+ raise EmptyLabel
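The limits enforced by `_validate_labels` come from RFC 1035: a label may be at most 63 octets, and a full name at most 255 octets on the wire (each label costs its length plus one length byte). A minimal standalone version of the same checks, for illustration:

```python
def validate_labels(labels):
    """Raise ValueError on over-long labels/names or misplaced empty labels."""
    total = 0
    for i, label in enumerate(labels):
        if len(label) > 63:
            raise ValueError('label too long')
        total += len(label) + 1  # +1 for the wire-format length byte
        if label == b'' and i != len(labels) - 1:
            raise ValueError('empty label not at the end')
    if total > 255:
        raise ValueError('name too long')
```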
+
+
+def _maybe_convert_to_binary(label):
+ """If label is ``text``, convert it to ``binary``. If it is already
+ ``binary`` just return it.
+
+ """
+
+ if isinstance(label, binary_type):
+ return label
+ if isinstance(label, text_type):
+ return label.encode()
+ raise ValueError
+
+
+class Name(object):
+
+ """A DNS name.
+
+ The dns.name.Name class represents a DNS name as a tuple of
+ labels. Each label is a `binary` in DNS wire format. Instances
+ of the class are immutable.
+ """
+
+ __slots__ = ['labels']
+
+ def __init__(self, labels):
+ """*labels* is any iterable whose values are ``text`` or ``binary``.
+ """
+
+ labels = [_maybe_convert_to_binary(x) for x in labels]
+ super(Name, self).__setattr__('labels', tuple(labels))
+ _validate_labels(self.labels)
+
+ def __setattr__(self, name, value):
+ # Names are immutable
+ raise TypeError("object doesn't support attribute assignment")
+
+ def __copy__(self):
+ return Name(self.labels)
+
+ def __deepcopy__(self, memo):
+ return Name(copy.deepcopy(self.labels, memo))
+
+ def __getstate__(self):
+ # Names can be pickled
+ return {'labels': self.labels}
+
+ def __setstate__(self, state):
+ super(Name, self).__setattr__('labels', state['labels'])
+ _validate_labels(self.labels)
+
+ def is_absolute(self):
+ """Is the most significant label of this name the root label?
+
+ Returns a ``bool``.
+ """
+
+ return len(self.labels) > 0 and self.labels[-1] == b''
+
+ def is_wild(self):
+ """Is this name wild? (I.e. Is the least significant label '*'?)
+
+ Returns a ``bool``.
+ """
+
+ return len(self.labels) > 0 and self.labels[0] == b'*'
+
+ def __hash__(self):
+ """Return a case-insensitive hash of the name.
+
+ Returns an ``int``.
+ """
+
+ h = long(0)
+ for label in self.labels:
+ for c in bytearray(label.lower()):
+ h += (h << 3) + c
+ return int(h % maxint)
+
+ def fullcompare(self, other):
+ """Compare two names, returning a 3-tuple
+ ``(relation, order, nlabels)``.
+
+        *relation* describes the relationship between the names,
+ and is one of: ``dns.name.NAMERELN_NONE``,
+ ``dns.name.NAMERELN_SUPERDOMAIN``, ``dns.name.NAMERELN_SUBDOMAIN``,
+ ``dns.name.NAMERELN_EQUAL``, or ``dns.name.NAMERELN_COMMONANCESTOR``.
+
+ *order* is < 0 if *self* < *other*, > 0 if *self* > *other*, and ==
+ 0 if *self* == *other*. A relative name is always less than an
+ absolute name. If both names have the same relativity, then
+ the DNSSEC order relation is used to order them.
+
+ *nlabels* is the number of significant labels that the two names
+ have in common.
+
+ Here are some examples. Names ending in "." are absolute names,
+ those not ending in "." are relative names.
+
+ ============= ============= =========== ===== =======
+ self other relation order nlabels
+ ============= ============= =========== ===== =======
+ www.example. www.example. equal 0 3
+ www.example. example. subdomain > 0 2
+ example. www.example. superdomain < 0 2
+ example1.com. example2.com. common anc. < 0 2
+ example1 example2. none < 0 0
+ example1. example2 none > 0 0
+ ============= ============= =========== ===== =======
+ """
+
+ sabs = self.is_absolute()
+ oabs = other.is_absolute()
+ if sabs != oabs:
+ if sabs:
+ return (NAMERELN_NONE, 1, 0)
+ else:
+ return (NAMERELN_NONE, -1, 0)
+ l1 = len(self.labels)
+ l2 = len(other.labels)
+ ldiff = l1 - l2
+ if ldiff < 0:
+ l = l1
+ else:
+ l = l2
+
+ order = 0
+ nlabels = 0
+ namereln = NAMERELN_NONE
+ while l > 0:
+ l -= 1
+ l1 -= 1
+ l2 -= 1
+ label1 = self.labels[l1].lower()
+ label2 = other.labels[l2].lower()
+ if label1 < label2:
+ order = -1
+ if nlabels > 0:
+ namereln = NAMERELN_COMMONANCESTOR
+ return (namereln, order, nlabels)
+ elif label1 > label2:
+ order = 1
+ if nlabels > 0:
+ namereln = NAMERELN_COMMONANCESTOR
+ return (namereln, order, nlabels)
+ nlabels += 1
+ order = ldiff
+ if ldiff < 0:
+ namereln = NAMERELN_SUPERDOMAIN
+ elif ldiff > 0:
+ namereln = NAMERELN_SUBDOMAIN
+ else:
+ namereln = NAMERELN_EQUAL
+ return (namereln, order, nlabels)
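The ordering shown in the table above (compare labels from the most significant end, case-insensitively; shorter names sort first when all shared labels match) can be sketched independently of the class. This is an illustrative reduction of the order component of `fullcompare`:

```python
def dnssec_order(labels1, labels2):
    """Return -1, 0 or 1 comparing two label tuples in DNSSEC order."""
    # Compare from the right: the most significant label comes last.
    for a, b in zip(reversed(labels1), reversed(labels2)):
        a, b = a.lower(), b.lower()
        if a != b:
            return -1 if a < b else 1
    # All shared labels equal: the shorter name sorts first.
    return (len(labels1) > len(labels2)) - (len(labels1) < len(labels2))
```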
+
+ def is_subdomain(self, other):
+ """Is self a subdomain of other?
+
+ Note that the notion of subdomain includes equality, e.g.
+ "dnpython.org" is a subdomain of itself.
+
+ Returns a ``bool``.
+ """
+
+ (nr, o, nl) = self.fullcompare(other)
+ if nr == NAMERELN_SUBDOMAIN or nr == NAMERELN_EQUAL:
+ return True
+ return False
+
+ def is_superdomain(self, other):
+ """Is self a superdomain of other?
+
+ Note that the notion of superdomain includes equality, e.g.
+ "dnpython.org" is a superdomain of itself.
+
+ Returns a ``bool``.
+ """
+
+ (nr, o, nl) = self.fullcompare(other)
+ if nr == NAMERELN_SUPERDOMAIN or nr == NAMERELN_EQUAL:
+ return True
+ return False
+
+ def canonicalize(self):
+ """Return a name which is equal to the current name, but is in
+ DNSSEC canonical form.
+ """
+
+ return Name([x.lower() for x in self.labels])
+
+ def __eq__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] == 0
+ else:
+ return False
+
+ def __ne__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] != 0
+ else:
+ return True
+
+ def __lt__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] < 0
+ else:
+ return NotImplemented
+
+ def __le__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] <= 0
+ else:
+ return NotImplemented
+
+ def __ge__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] >= 0
+ else:
+ return NotImplemented
+
+ def __gt__(self, other):
+ if isinstance(other, Name):
+ return self.fullcompare(other)[1] > 0
+ else:
+ return NotImplemented
+
+    def __repr__(self):
+        return '<DNS name ' + self.__str__() + '>'
+
+ def __str__(self):
+ return self.to_text(False)
+
+ def to_text(self, omit_final_dot=False):
+ """Convert name to DNS text format.
+
+ *omit_final_dot* is a ``bool``. If True, don't emit the final
+ dot (denoting the root label) for absolute names. The default
+ is False.
+
+ Returns a ``text``.
+ """
+
+ if len(self.labels) == 0:
+ return maybe_decode(b'@')
+ if len(self.labels) == 1 and self.labels[0] == b'':
+ return maybe_decode(b'.')
+ if omit_final_dot and self.is_absolute():
+ l = self.labels[:-1]
+ else:
+ l = self.labels
+ s = b'.'.join(map(_escapify, l))
+ return maybe_decode(s)
+
+ def to_unicode(self, omit_final_dot=False, idna_codec=None):
+ """Convert name to Unicode text format.
+
+ IDN ACE labels are converted to Unicode.
+
+ *omit_final_dot* is a ``bool``. If True, don't emit the final
+ dot (denoting the root label) for absolute names. The default
+ is False.
+
+ *idna_codec* specifies the IDNA encoder/decoder. If None, the
+ dns.name.IDNA_2003_Practical encoder/decoder is used.
+ The IDNA_2003_Practical decoder does
+ not impose any policy, it just decodes punycode, so if you
+ don't want checking for compliance, you can use this decoder
+ for IDNA2008 as well.
+
+ Returns a ``text``.
+ """
+
+ if len(self.labels) == 0:
+ return u'@'
+ if len(self.labels) == 1 and self.labels[0] == b'':
+ return u'.'
+ if omit_final_dot and self.is_absolute():
+ l = self.labels[:-1]
+ else:
+ l = self.labels
+ if idna_codec is None:
+ idna_codec = IDNA_2003_Practical
+ return u'.'.join([idna_codec.decode(x) for x in l])
+
+ def to_digestable(self, origin=None):
+ """Convert name to a format suitable for digesting in hashes.
+
+ The name is canonicalized and converted to uncompressed wire
+ format. All names in wire format are absolute. If the name
+ is a relative name, then an origin must be supplied.
+
+ *origin* is a ``dns.name.Name`` or ``None``. If the name is
+ relative and origin is not ``None``, then origin will be appended
+ to the name.
+
+ Raises ``dns.name.NeedAbsoluteNameOrOrigin`` if the name is
+ relative and no origin was provided.
+
+ Returns a ``binary``.
+ """
+
+ if not self.is_absolute():
+ if origin is None or not origin.is_absolute():
+ raise NeedAbsoluteNameOrOrigin
+ labels = list(self.labels)
+ labels.extend(list(origin.labels))
+ else:
+ labels = self.labels
+ dlabels = [struct.pack('!B%ds' % len(x), len(x), x.lower())
+ for x in labels]
+ return b''.join(dlabels)
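The uncompressed wire encoding produced by `to_digestable` is simply each label prefixed by its one-byte length, with the labels lower-cased for canonical form. A standalone sketch of the same packing:

```python
import struct

def digestable(labels):
    """Canonical (lower-cased), uncompressed wire form of a label sequence."""
    return b''.join(struct.pack('!B', len(l)) + l.lower() for l in labels)
```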
+
+ def to_wire(self, file=None, compress=None, origin=None):
+ """Convert name to wire format, possibly compressing it.
+
+ *file* is the file where the name is emitted (typically a
+ BytesIO file). If ``None`` (the default), a ``binary``
+ containing the wire name will be returned.
+
+ *compress*, a ``dict``, is the compression table to use. If
+ ``None`` (the default), names will not be compressed.
+
+ *origin* is a ``dns.name.Name`` or ``None``. If the name is
+ relative and origin is not ``None``, then *origin* will be appended
+ to it.
+
+ Raises ``dns.name.NeedAbsoluteNameOrOrigin`` if the name is
+ relative and no origin was provided.
+
+ Returns a ``binary`` or ``None``.
+ """
+
+ if file is None:
+ file = BytesIO()
+ want_return = True
+ else:
+ want_return = False
+
+ if not self.is_absolute():
+ if origin is None or not origin.is_absolute():
+ raise NeedAbsoluteNameOrOrigin
+ labels = list(self.labels)
+ labels.extend(list(origin.labels))
+ else:
+ labels = self.labels
+ i = 0
+ for label in labels:
+ n = Name(labels[i:])
+ i += 1
+ if compress is not None:
+ pos = compress.get(n)
+ else:
+ pos = None
+ if pos is not None:
+ value = 0xc000 + pos
+ s = struct.pack('!H', value)
+ file.write(s)
+ break
+ else:
+ if compress is not None and len(n) > 1:
+ pos = file.tell()
+ if pos <= 0x3fff:
+ compress[n] = pos
+ l = len(label)
+ file.write(struct.pack('!B', l))
+ if l > 0:
+ file.write(label)
+ if want_return:
+ return file.getvalue()
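The compression pointers written by `to_wire` are two bytes: the top two bits set, with the remaining 14 bits holding the offset of the earlier occurrence of the name. That is why the code above only records positions up to `0x3fff`. An illustrative standalone encoder:

```python
import struct

def compression_pointer(offset):
    """Encode a DNS name compression pointer (RFC 1035, section 4.1.4)."""
    if offset > 0x3FFF:
        raise ValueError('offset does not fit in 14 bits')
    return struct.pack('!H', 0xC000 | offset)
```

For example, the common `0xc00c` pointer refers back to offset 12, the start of the question section's name in a standard DNS message.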
+
+ def __len__(self):
+ """The length of the name (in labels).
+
+ Returns an ``int``.
+ """
+
+ return len(self.labels)
+
+ def __getitem__(self, index):
+ return self.labels[index]
+
+ def __add__(self, other):
+ return self.concatenate(other)
+
+ def __sub__(self, other):
+ return self.relativize(other)
+
+ def split(self, depth):
+ """Split a name into a prefix and suffix names at the specified depth.
+
+ *depth* is an ``int`` specifying the number of labels in the suffix
+
+ Raises ``ValueError`` if *depth* was not >= 0 and <= the length of the
+ name.
+
+ Returns the tuple ``(prefix, suffix)``.
+ """
+
+ l = len(self.labels)
+ if depth == 0:
+ return (self, dns.name.empty)
+ elif depth == l:
+ return (dns.name.empty, self)
+ elif depth < 0 or depth > l:
+ raise ValueError(
+ 'depth must be >= 0 and <= the length of the name')
+ return (Name(self[: -depth]), Name(self[-depth:]))
+
+ def concatenate(self, other):
+ """Return a new name which is the concatenation of self and other.
+
+ Raises ``dns.name.AbsoluteConcatenation`` if the name is
+ absolute and *other* is not the empty name.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if self.is_absolute() and len(other) > 0:
+ raise AbsoluteConcatenation
+ labels = list(self.labels)
+ labels.extend(list(other.labels))
+ return Name(labels)
+
+ def relativize(self, origin):
+ """If the name is a subdomain of *origin*, return a new name which is
+ the name relative to origin. Otherwise return the name.
+
+ For example, relativizing ``www.dnspython.org.`` to origin
+ ``dnspython.org.`` returns the name ``www``. Relativizing ``example.``
+ to origin ``dnspython.org.`` returns ``example.``.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if origin is not None and self.is_subdomain(origin):
+ return Name(self[: -len(origin)])
+ else:
+ return self
+
+ def derelativize(self, origin):
+ """If the name is a relative name, return a new name which is the
+ concatenation of the name and origin. Otherwise return the name.
+
+ For example, derelativizing ``www`` to origin ``dnspython.org.``
+ returns the name ``www.dnspython.org.``. Derelativizing ``example.``
+ to origin ``dnspython.org.`` returns ``example.``.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if not self.is_absolute():
+ return self.concatenate(origin)
+ else:
+ return self
+
+ def choose_relativity(self, origin=None, relativize=True):
+ """Return a name with the relativity desired by the caller.
+
+ If *origin* is ``None``, then the name is returned.
+ Otherwise, if *relativize* is ``True`` the name is
+ relativized, and if *relativize* is ``False`` the name is
+ derelativized.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if origin:
+ if relativize:
+ return self.relativize(origin)
+ else:
+ return self.derelativize(origin)
+ else:
+ return self
+
+ def parent(self):
+ """Return the parent of the name.
+
+ For example, the parent of ``www.dnspython.org.`` is ``dnspython.org``.
+
+ Raises ``dns.name.NoParent`` if the name is either the root name or the
+ empty name, and thus has no parent.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if self == root or self == empty:
+ raise NoParent
+ return Name(self.labels[1:])
+
+#: The root name, '.'
+root = Name([b''])
+
+#: The empty name.
+empty = Name([])
+
+def from_unicode(text, origin=root, idna_codec=None):
+ """Convert unicode text into a Name object.
+
+ Labels are encoded in IDN ACE form according to rules specified by
+ the IDNA codec.
+
+ *text*, a ``text``, is the text to convert into a name.
+
+ *origin*, a ``dns.name.Name``, specifies the origin to
+ append to non-absolute names. The default is the root name.
+
+ *idna_codec*, a ``dns.name.IDNACodec``, specifies the IDNA
+ encoder/decoder. If ``None``, the default IDNA 2003 encoder/decoder
+ is used.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if not isinstance(text, text_type):
+ raise ValueError("input to from_unicode() must be a unicode string")
+ if not (origin is None or isinstance(origin, Name)):
+ raise ValueError("origin must be a Name or None")
+ labels = []
+ label = u''
+ escaping = False
+ edigits = 0
+ total = 0
+ if idna_codec is None:
+ idna_codec = IDNA_2003
+ if text == u'@':
+ text = u''
+ if text:
+ if text == u'.':
+ return Name([b'']) # no Unicode "u" on this constant!
+ for c in text:
+ if escaping:
+ if edigits == 0:
+ if c.isdigit():
+ total = int(c)
+ edigits += 1
+ else:
+ label += c
+ escaping = False
+ else:
+ if not c.isdigit():
+ raise BadEscape
+ total *= 10
+ total += int(c)
+ edigits += 1
+ if edigits == 3:
+ escaping = False
+ label += unichr(total)
+ elif c in [u'.', u'\u3002', u'\uff0e', u'\uff61']:
+ if len(label) == 0:
+ raise EmptyLabel
+ labels.append(idna_codec.encode(label))
+ label = u''
+ elif c == u'\\':
+ escaping = True
+ edigits = 0
+ total = 0
+ else:
+ label += c
+ if escaping:
+ raise BadEscape
+ if len(label) > 0:
+ labels.append(idna_codec.encode(label))
+ else:
+ labels.append(b'')
+
+ if (len(labels) == 0 or labels[-1] != b'') and origin is not None:
+ labels.extend(list(origin.labels))
+ return Name(labels)
+
+
+def from_text(text, origin=root, idna_codec=None):
+ """Convert text into a Name object.
+
+ *text*, a ``text``, is the text to convert into a name.
+
+ *origin*, a ``dns.name.Name``, specifies the origin to
+ append to non-absolute names. The default is the root name.
+
+ *idna_codec*, a ``dns.name.IDNACodec``, specifies the IDNA
+ encoder/decoder. If ``None``, the default IDNA 2003 encoder/decoder
+ is used.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if isinstance(text, text_type):
+ return from_unicode(text, origin, idna_codec)
+ if not isinstance(text, binary_type):
+ raise ValueError("input to from_text() must be a string")
+ if not (origin is None or isinstance(origin, Name)):
+ raise ValueError("origin must be a Name or None")
+ labels = []
+ label = b''
+ escaping = False
+ edigits = 0
+ total = 0
+ if text == b'@':
+ text = b''
+ if text:
+ if text == b'.':
+ return Name([b''])
+ for c in bytearray(text):
+ byte_ = struct.pack('!B', c)
+ if escaping:
+ if edigits == 0:
+ if byte_.isdigit():
+ total = int(byte_)
+ edigits += 1
+ else:
+ label += byte_
+ escaping = False
+ else:
+ if not byte_.isdigit():
+ raise BadEscape
+ total *= 10
+ total += int(byte_)
+ edigits += 1
+ if edigits == 3:
+ escaping = False
+ label += struct.pack('!B', total)
+ elif byte_ == b'.':
+ if len(label) == 0:
+ raise EmptyLabel
+ labels.append(label)
+ label = b''
+ elif byte_ == b'\\':
+ escaping = True
+ edigits = 0
+ total = 0
+ else:
+ label += byte_
+ if escaping:
+ raise BadEscape
+ if len(label) > 0:
+ labels.append(label)
+ else:
+ labels.append(b'')
+ if (len(labels) == 0 or labels[-1] != b'') and origin is not None:
+ labels.extend(list(origin.labels))
+ return Name(labels)
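The escape handling in the parsing loop above accepts two forms: a backslash followed by exactly three decimal digits (a byte value), or a backslash followed by any other character (taken literally). A simplified, text-only sketch of that state machine; the `parse_escapes` helper below is hypothetical and, unlike `from_text`, does not reject a digit escape with fewer than three digits:

```python
def parse_escapes(text):
    """Decode \\DDD (three decimal digits) and \\X escapes in one label."""
    out, i = '', 0
    while i < len(text):
        c = text[i]
        if c == '\\':
            digits = text[i + 1:i + 4]
            if len(digits) == 3 and digits.isdigit():
                out += chr(int(digits))  # \DDD: decimal byte value
                i += 4
            else:
                out += text[i + 1]       # \X: the character itself
                i += 2
        else:
            out += c
            i += 1
    return out
```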
+
+
+def from_wire(message, current):
+ """Convert possibly compressed wire format into a Name.
+
+ *message* is a ``binary`` containing an entire DNS message in DNS
+ wire form.
+
+ *current*, an ``int``, is the offset of the beginning of the name
+ from the start of the message
+
+ Raises ``dns.name.BadPointer`` if a compression pointer did not
+ point backwards in the message.
+
+ Raises ``dns.name.BadLabelType`` if an invalid label type was encountered.
+
+ Returns a ``(dns.name.Name, int)`` tuple consisting of the name
+ that was read and the number of bytes of the wire format message
+ which were consumed reading it.
+ """
+
+ if not isinstance(message, binary_type):
+ raise ValueError("input to from_wire() must be a byte string")
+ message = dns.wiredata.maybe_wrap(message)
+ labels = []
+ biggest_pointer = current
+ hops = 0
+ count = message[current]
+ current += 1
+ cused = 1
+ while count != 0:
+ if count < 64:
+ labels.append(message[current: current + count].unwrap())
+ current += count
+ if hops == 0:
+ cused += count
+ elif count >= 192:
+ current = (count & 0x3f) * 256 + message[current]
+ if hops == 0:
+ cused += 1
+ if current >= biggest_pointer:
+ raise BadPointer
+ biggest_pointer = current
+ hops += 1
+ else:
+ raise BadLabelType
+ count = message[current]
+ current += 1
+ if hops == 0:
+ cused += 1
+ labels.append('')
+ return (Name(labels), cused)
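The compression-pointer handling in `from_wire()` can be exercised with a hand-built message. The following standalone sketch (hypothetical `read_name` helper, Python 3, plain `ValueError` in place of `BadPointer`/`BadLabelType`) mirrors the loop above:

```python
def read_name(message, current):
    """Decode a (possibly compressed) name; return (labels, bytes consumed)."""
    labels, hops, cused = [], 0, 1
    biggest_pointer = current
    count = message[current]
    current += 1
    while count != 0:
        if count < 64:                      # ordinary label of `count` bytes
            labels.append(message[current:current + count])
            current += count
            if hops == 0:
                cused += count
        elif count >= 192:                  # top two bits set: compression pointer
            current = (count & 0x3F) * 256 + message[current]
            if hops == 0:
                cused += 1                  # second byte of the pointer
            if current >= biggest_pointer:  # pointers must point backwards
                raise ValueError("bad pointer")
            biggest_pointer = current
            hops += 1
        else:
            raise ValueError("bad label type")
        count = message[current]
        current += 1
        if hops == 0:
            cused += 1                      # next length byte
    labels.append(b"")
    return labels, cused
```

In the test message below, `www.example.com` occupies offsets 0-16; a second name consisting of an `ftp` label plus a pointer back to offset 4 decodes to `ftp.example.com` while consuming only six wire bytes.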
diff --git a/openpype/vendor/python/python_2/dns/namedict.py b/openpype/vendor/python/python_2/dns/namedict.py
new file mode 100644
index 0000000000..37a13104e6
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/namedict.py
@@ -0,0 +1,108 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+# Copyright (C) 2016 Coresec Systems AB
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND CORESEC SYSTEMS AB DISCLAIMS ALL
+# WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL CORESEC
+# SYSTEMS AB BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
+# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
+# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
+# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
+# WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS name dictionary"""
+
+import collections
+import dns.name
+from ._compat import xrange
+
+
+class NameDict(collections.MutableMapping):
+ """A dictionary whose keys are dns.name.Name objects.
+
+ In addition to being like a regular Python dictionary, this
+ dictionary can also get the deepest match for a given key.
+ """
+
+ __slots__ = ["max_depth", "max_depth_items", "__store"]
+
+ def __init__(self, *args, **kwargs):
+ super(NameDict, self).__init__()
+ self.__store = dict()
+ #: the maximum depth of the keys that have ever been added
+ self.max_depth = 0
+ #: the number of items of maximum depth
+ self.max_depth_items = 0
+ self.update(dict(*args, **kwargs))
+
+ def __update_max_depth(self, key):
+ if len(key) == self.max_depth:
+ self.max_depth_items = self.max_depth_items + 1
+ elif len(key) > self.max_depth:
+ self.max_depth = len(key)
+ self.max_depth_items = 1
+
+ def __getitem__(self, key):
+ return self.__store[key]
+
+ def __setitem__(self, key, value):
+ if not isinstance(key, dns.name.Name):
+ raise ValueError('NameDict key must be a name')
+ self.__store[key] = value
+ self.__update_max_depth(key)
+
+ def __delitem__(self, key):
+ value = self.__store.pop(key)
+ if len(key) == self.max_depth:
+ self.max_depth_items = self.max_depth_items - 1
+ if self.max_depth_items == 0:
+ self.max_depth = 0
+ for k in self.__store:
+ self.__update_max_depth(k)
+
+ def __iter__(self):
+ return iter(self.__store)
+
+ def __len__(self):
+ return len(self.__store)
+
+ def has_key(self, key):
+ return key in self.__store
+
+ def get_deepest_match(self, name):
+ """Find the deepest match to *fname* in the dictionary.
+
+ The deepest match is the longest name in the dictionary which is
+ a superdomain of *name*. Note that *superdomain* includes matching
+ *name* itself.
+
+ *name*, a ``dns.name.Name``, the name to find.
+
+ Returns a ``(key, value)`` where *key* is the deepest
+ ``dns.name.Name``, and *value* is the value associated with *key*.
+ """
+
+ depth = len(name)
+ if depth > self.max_depth:
+ depth = self.max_depth
+ for i in xrange(-depth, 0):
+ n = dns.name.Name(name[i:])
+ if n in self:
+ return (n, self[n])
+ v = self[dns.name.empty]
+ return (dns.name.empty, v)
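The suffix walk in `get_deepest_match()` can be illustrated without the surrounding class. This sketch (hypothetical `deepest_match` helper) uses plain tuples of labels, leftmost label first and ending with the root label `b''`, mirroring `dns.name.Name` ordering:

```python
def deepest_match(table, name):
    """Return (key, value) for the longest suffix of `name` present in table."""
    for i in range(-len(name), 0):      # whole name first, then shorter suffixes
        suffix = name[i:]
        if suffix in table:
            return suffix, table[suffix]
    raise KeyError(name)                # not even the root key is present
```

Because the loop ends with the one-element suffix `(b'',)`, a table that contains the root key never raises, which is the same fallback the method above reaches via `dns.name.empty`.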
diff --git a/openpype/vendor/python/python_2/dns/node.py b/openpype/vendor/python/python_2/dns/node.py
new file mode 100644
index 0000000000..8a7f19f523
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/node.py
@@ -0,0 +1,182 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS nodes. A node is a set of rdatasets."""
+
+from io import StringIO
+
+import dns.rdataset
+import dns.rdatatype
+import dns.renderer
+
+
+class Node(object):
+
+ """A Node is a set of rdatasets."""
+
+ __slots__ = ['rdatasets']
+
+ def __init__(self):
+ #: the set of rdatasets, represented as a list.
+ self.rdatasets = []
+
+ def to_text(self, name, **kw):
+ """Convert a node to text format.
+
+ Each rdataset at the node is printed. Any keyword arguments
+ to this method are passed on to the rdataset's to_text() method.
+
+ *name*, a ``dns.name.Name`` or ``text``, the owner name of the rdatasets.
+
+ Returns a ``text``.
+ """
+
+ s = StringIO()
+ for rds in self.rdatasets:
+ if len(rds) > 0:
+ s.write(rds.to_text(name, **kw))
+ s.write(u'\n')
+ return s.getvalue()[:-1]
+
+ def __repr__(self):
+ return '&lt;DNS node ' + str(id(self)) + '&gt;'
+
+ def __eq__(self, other):
+ #
+ # This is inefficient. Good thing we don't need to do it much.
+ #
+ for rd in self.rdatasets:
+ if rd not in other.rdatasets:
+ return False
+ for rd in other.rdatasets:
+ if rd not in self.rdatasets:
+ return False
+ return True
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+ def __len__(self):
+ return len(self.rdatasets)
+
+ def __iter__(self):
+ return iter(self.rdatasets)
+
+ def find_rdataset(self, rdclass, rdtype, covers=dns.rdatatype.NONE,
+ create=False):
+ """Find an rdataset matching the specified properties in the
+ current node.
+
+ *rdclass*, an ``int``, the class of the rdataset.
+
+ *rdtype*, an ``int``, the type of the rdataset.
+
+ *covers*, an ``int``, the covered type. Usually this value is
+ dns.rdatatype.NONE, but if the rdtype is dns.rdatatype.SIG or
+ dns.rdatatype.RRSIG, then the covers value will be the rdata
+ type the SIG/RRSIG covers. The library treats the SIG and RRSIG
+ types as if they were a family of
+ types, e.g. RRSIG(A), RRSIG(NS), RRSIG(SOA). This makes RRSIGs much
+ easier to work with than if RRSIGs covering different rdata
+ types were aggregated into a single RRSIG rdataset.
+
+ *create*, a ``bool``. If True, create the rdataset if it is not found.
+
+ Raises ``KeyError`` if an rdataset of the desired type and class does
+ not exist and *create* is not ``True``.
+
+ Returns a ``dns.rdataset.Rdataset``.
+ """
+
+ for rds in self.rdatasets:
+ if rds.match(rdclass, rdtype, covers):
+ return rds
+ if not create:
+ raise KeyError
+ rds = dns.rdataset.Rdataset(rdclass, rdtype)
+ self.rdatasets.append(rds)
+ return rds
+
+ def get_rdataset(self, rdclass, rdtype, covers=dns.rdatatype.NONE,
+ create=False):
+ """Get an rdataset matching the specified properties in the
+ current node.
+
+ None is returned if an rdataset of the specified type and
+ class does not exist and *create* is not ``True``.
+
+ *rdclass*, an ``int``, the class of the rdataset.
+
+ *rdtype*, an ``int``, the type of the rdataset.
+
+ *covers*, an ``int``, the covered type. Usually this value is
+ dns.rdatatype.NONE, but if the rdtype is dns.rdatatype.SIG or
+ dns.rdatatype.RRSIG, then the covers value will be the rdata
+ type the SIG/RRSIG covers. The library treats the SIG and RRSIG
+ types as if they were a family of
+ types, e.g. RRSIG(A), RRSIG(NS), RRSIG(SOA). This makes RRSIGs much
+ easier to work with than if RRSIGs covering different rdata
+ types were aggregated into a single RRSIG rdataset.
+
+ *create*, a ``bool``. If True, create the rdataset if it is not found.
+
+ Returns a ``dns.rdataset.Rdataset`` or ``None``.
+ """
+
+ try:
+ rds = self.find_rdataset(rdclass, rdtype, covers, create)
+ except KeyError:
+ rds = None
+ return rds
+
+ def delete_rdataset(self, rdclass, rdtype, covers=dns.rdatatype.NONE):
+ """Delete the rdataset matching the specified properties in the
+ current node.
+
+ If a matching rdataset does not exist, it is not an error.
+
+ *rdclass*, an ``int``, the class of the rdataset.
+
+ *rdtype*, an ``int``, the type of the rdataset.
+
+ *covers*, an ``int``, the covered type.
+ """
+
+ rds = self.get_rdataset(rdclass, rdtype, covers)
+ if rds is not None:
+ self.rdatasets.remove(rds)
+
+ def replace_rdataset(self, replacement):
+ """Replace an rdataset.
+
+ It is not an error if there is no rdataset matching *replacement*.
+
+ Ownership of the *replacement* object is transferred to the node;
+ in other words, this method does not store a copy of *replacement*
+ at the node, it stores *replacement* itself.
+
+ *replacement*, a ``dns.rdataset.Rdataset``.
+
+ Raises ``ValueError`` if *replacement* is not a
+ ``dns.rdataset.Rdataset``.
+ """
+
+ if not isinstance(replacement, dns.rdataset.Rdataset):
+ raise ValueError('replacement is not an rdataset')
+ self.delete_rdataset(replacement.rdclass, replacement.rdtype,
+ replacement.covers)
+ self.rdatasets.append(replacement)
diff --git a/openpype/vendor/python/python_2/dns/opcode.py b/openpype/vendor/python/python_2/dns/opcode.py
new file mode 100644
index 0000000000..c0735ba47b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/opcode.py
@@ -0,0 +1,119 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Opcodes."""
+
+import dns.exception
+
+#: Query
+QUERY = 0
+#: Inverse Query (historical)
+IQUERY = 1
+#: Server Status (unspecified and unimplemented anywhere)
+STATUS = 2
+#: Notify
+NOTIFY = 4
+#: Dynamic Update
+UPDATE = 5
+
+_by_text = {
+ 'QUERY': QUERY,
+ 'IQUERY': IQUERY,
+ 'STATUS': STATUS,
+ 'NOTIFY': NOTIFY,
+ 'UPDATE': UPDATE
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+ # would cause the mapping not to be a true inverse.
+
+_by_value = {y: x for x, y in _by_text.items()}
+
+
+class UnknownOpcode(dns.exception.DNSException):
+ """An DNS opcode is unknown."""
+
+
+def from_text(text):
+ """Convert text into an opcode.
+
+ *text*, a ``text``, the textual opcode
+
+ Raises ``dns.opcode.UnknownOpcode`` if the opcode is unknown.
+
+ Returns an ``int``.
+ """
+
+ if text.isdigit():
+ value = int(text)
+ if value >= 0 and value <= 15:
+ return value
+ value = _by_text.get(text.upper())
+ if value is None:
+ raise UnknownOpcode
+ return value
+
+
+def from_flags(flags):
+ """Extract an opcode from DNS message flags.
+
+ *flags*, an ``int``, the DNS flags.
+
+ Returns an ``int``.
+ """
+
+ return (flags & 0x7800) >> 11
+
+
+def to_flags(value):
+ """Convert an opcode to a value suitable for ORing into DNS message
+ flags.
+
+ *value*, an ``int``, the DNS opcode value.
+
+ Returns an ``int``.
+ """
+
+ return (value << 11) & 0x7800
+
+
+def to_text(value):
+ """Convert an opcode to text.
+
+ *value*, an ``int``, the opcode value.
+
+ Raises ``dns.opcode.UnknownOpcode`` if the opcode is unknown.
+
+ Returns a ``text``.
+ """
+
+ text = _by_value.get(value)
+ if text is None:
+ text = str(value)
+ return text
+
+
+def is_update(flags):
+ """Is the opcode in flags UPDATE?
+
+ *flags*, an ``int``, the DNS message flags.
+
+ Returns a ``bool``.
+ """
+
+ return from_flags(flags) == UPDATE
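The opcode occupies bits 11-14 of the 16-bit DNS header flags word, which is all that `to_flags()`/`from_flags()` encode. A minimal round-trip sketch (standalone re-implementation for illustration, not the vendored functions):

```python
OPCODE_MASK = 0x7800                  # bits 11-14 of the flags word

def to_flags(opcode):
    """Shift a 4-bit opcode into its position in the flags word."""
    return (opcode << 11) & OPCODE_MASK

def from_flags(flags):
    """Extract the opcode from a flags word."""
    return (flags & OPCODE_MASK) >> 11
```

The other header bits (QR, RD, RA, rcode) lie outside the mask, so an opcode can safely be ORed into an existing flags word.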
diff --git a/openpype/vendor/python/python_2/dns/py.typed b/openpype/vendor/python/python_2/dns/py.typed
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/openpype/vendor/python/python_2/dns/query.py b/openpype/vendor/python/python_2/dns/query.py
new file mode 100644
index 0000000000..c0c517ccd4
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/query.py
@@ -0,0 +1,683 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Talk to a DNS server."""
+
+from __future__ import generators
+
+import errno
+import select
+import socket
+import struct
+import sys
+import time
+
+import dns.exception
+import dns.inet
+import dns.name
+import dns.message
+import dns.rcode
+import dns.rdataclass
+import dns.rdatatype
+from ._compat import long, string_types, PY3
+
+if PY3:
+ select_error = OSError
+else:
+ select_error = select.error
+
+# Function used to create a socket. Can be overridden if needed in special
+# situations.
+socket_factory = socket.socket
+
+class UnexpectedSource(dns.exception.DNSException):
+ """A DNS query response came from an unexpected address or port."""
+
+
+class BadResponse(dns.exception.FormError):
+ """A DNS query response does not respond to the question asked."""
+
+
+class TransferError(dns.exception.DNSException):
+ """A zone transfer response got a non-zero rcode."""
+
+ def __init__(self, rcode):
+ message = 'Zone transfer error: %s' % dns.rcode.to_text(rcode)
+ super(TransferError, self).__init__(message)
+ self.rcode = rcode
+
+
+def _compute_expiration(timeout):
+ if timeout is None:
+ return None
+ else:
+ return time.time() + timeout
+
+# This module can use either poll() or select() as the "polling backend".
+#
+# A backend function takes an fd, bools for readability, writablity, and
+# error detection, and a timeout.
+
+def _poll_for(fd, readable, writable, error, timeout):
+ """Poll polling backend."""
+
+ event_mask = 0
+ if readable:
+ event_mask |= select.POLLIN
+ if writable:
+ event_mask |= select.POLLOUT
+ if error:
+ event_mask |= select.POLLERR
+
+ pollable = select.poll()
+ pollable.register(fd, event_mask)
+
+ if timeout:
+ event_list = pollable.poll(long(timeout * 1000))
+ else:
+ event_list = pollable.poll()
+
+ return bool(event_list)
+
+
+def _select_for(fd, readable, writable, error, timeout):
+ """Select polling backend."""
+
+ rset, wset, xset = [], [], []
+
+ if readable:
+ rset = [fd]
+ if writable:
+ wset = [fd]
+ if error:
+ xset = [fd]
+
+ if timeout is None:
+ (rcount, wcount, xcount) = select.select(rset, wset, xset)
+ else:
+ (rcount, wcount, xcount) = select.select(rset, wset, xset, timeout)
+
+ return bool((rcount or wcount or xcount))
+
+
+def _wait_for(fd, readable, writable, error, expiration):
+ # Use the selected polling backend to wait for any of the specified
+ # events. An "expiration" absolute time is converted into a relative
+ # timeout.
+
+ done = False
+ while not done:
+ if expiration is None:
+ timeout = None
+ else:
+ timeout = expiration - time.time()
+ if timeout <= 0.0:
+ raise dns.exception.Timeout
+ try:
+ if not _polling_backend(fd, readable, writable, error, timeout):
+ raise dns.exception.Timeout
+ except select_error as e:
+ if e.args[0] != errno.EINTR:
+ raise e
+ done = True
+
+
+def _set_polling_backend(fn):
+ # Internal API. Do not use.
+
+ global _polling_backend
+
+ _polling_backend = fn
+
+if hasattr(select, 'poll'):
+ # Prefer poll() on platforms that support it because it has no
+ # limits on the maximum value of a file descriptor (plus it will
+ # be more efficient for high values).
+ _polling_backend = _poll_for
+else:
+ _polling_backend = _select_for
+
+
+def _wait_for_readable(s, expiration):
+ _wait_for(s, True, False, True, expiration)
+
+
+def _wait_for_writable(s, expiration):
+ _wait_for(s, False, True, True, expiration)
+
+
+def _addresses_equal(af, a1, a2):
+ # Convert the first value of the tuple, which is a textual format
+ # address into binary form, so that we are not confused by different
+ # textual representations of the same address
+ try:
+ n1 = dns.inet.inet_pton(af, a1[0])
+ n2 = dns.inet.inet_pton(af, a2[0])
+ except dns.exception.SyntaxError:
+ return False
+ return n1 == n2 and a1[1:] == a2[1:]
+
+
+def _destination_and_source(af, where, port, source, source_port):
+ # Apply defaults and compute destination and source tuples
+ # suitable for use in connect(), sendto(), or bind().
+ if af is None:
+ try:
+ af = dns.inet.af_for_address(where)
+ except Exception:
+ af = dns.inet.AF_INET
+ if af == dns.inet.AF_INET:
+ destination = (where, port)
+ if source is not None or source_port != 0:
+ if source is None:
+ source = '0.0.0.0'
+ source = (source, source_port)
+ elif af == dns.inet.AF_INET6:
+ destination = (where, port, 0, 0)
+ if source is not None or source_port != 0:
+ if source is None:
+ source = '::'
+ source = (source, source_port, 0, 0)
+ return (af, destination, source)
+
+
+def send_udp(sock, what, destination, expiration=None):
+ """Send a DNS message to the specified UDP socket.
+
+ *sock*, a ``socket``.
+
+ *what*, a ``binary`` or ``dns.message.Message``, the message to send.
+
+ *destination*, a destination tuple appropriate for the address family
+ of the socket, specifying where to send the query.
+
+ *expiration*, a ``float`` or ``None``, the absolute time at which
+ a timeout exception should be raised. If ``None``, no timeout will
+ occur.
+
+ Returns an ``(int, float)`` tuple of bytes sent and the sent time.
+ """
+
+ if isinstance(what, dns.message.Message):
+ what = what.to_wire()
+ _wait_for_writable(sock, expiration)
+ sent_time = time.time()
+ n = sock.sendto(what, destination)
+ return (n, sent_time)
+
+
+def receive_udp(sock, destination, expiration=None,
+ ignore_unexpected=False, one_rr_per_rrset=False,
+ keyring=None, request_mac=b'', ignore_trailing=False):
+ """Read a DNS message from a UDP socket.
+
+ *sock*, a ``socket``.
+
+ *destination*, a destination tuple appropriate for the address family
+ of the socket, specifying where the associated query was sent.
+
+ *expiration*, a ``float`` or ``None``, the absolute time at which
+ a timeout exception should be raised. If ``None``, no timeout will
+ occur.
+
+ *ignore_unexpected*, a ``bool``. If ``True``, ignore responses from
+ unexpected sources.
+
+ *one_rr_per_rrset*, a ``bool``. If ``True``, put each RR into its own
+ RRset.
+
+ *keyring*, a ``dict``, the keyring to use for TSIG.
+
+ *request_mac*, a ``binary``, the MAC of the request (for TSIG).
+
+ *ignore_trailing*, a ``bool``. If ``True``, ignore trailing
+ junk at end of the received message.
+
+ Raises if the message is malformed, if network errors occur, or if
+ there is a timeout.
+
+ Returns a ``dns.message.Message`` object.
+ """
+
+ wire = b''
+ while 1:
+ _wait_for_readable(sock, expiration)
+ (wire, from_address) = sock.recvfrom(65535)
+ if _addresses_equal(sock.family, from_address, destination) or \
+ (dns.inet.is_multicast(destination[0]) and
+ from_address[1:] == destination[1:]):
+ break
+ if not ignore_unexpected:
+ raise UnexpectedSource('got a response from '
+ '%s instead of %s' % (from_address,
+ destination))
+ received_time = time.time()
+ r = dns.message.from_wire(wire, keyring=keyring, request_mac=request_mac,
+ one_rr_per_rrset=one_rr_per_rrset,
+ ignore_trailing=ignore_trailing)
+ return (r, received_time)
+
+def udp(q, where, timeout=None, port=53, af=None, source=None, source_port=0,
+ ignore_unexpected=False, one_rr_per_rrset=False, ignore_trailing=False):
+ """Return the response obtained after sending a query via UDP.
+
+ *q*, a ``dns.message.Message``, the query to send
+
+ *where*, a ``text`` containing an IPv4 or IPv6 address, where
+ to send the message.
+
+ *timeout*, a ``float`` or ``None``, the number of seconds to wait before the
+ query times out. If ``None``, the default, wait forever.
+
+ *port*, an ``int``, the port to send the message to. The default is 53.
+
+ *af*, an ``int``, the address family to use. The default is ``None``,
+ which causes the address family to be inferred from the form of
+ *where*. If the inference attempt fails, AF_INET is used. This
+ parameter is historical; you need never set it.
+
+ *source*, a ``text`` containing an IPv4 or IPv6 address, specifying
+ the source address. The default is the wildcard address.
+
+ *source_port*, an ``int``, the port from which to send the message.
+ The default is 0.
+
+ *ignore_unexpected*, a ``bool``. If ``True``, ignore responses from
+ unexpected sources.
+
+ *one_rr_per_rrset*, a ``bool``. If ``True``, put each RR into its own
+ RRset.
+
+ *ignore_trailing*, a ``bool``. If ``True``, ignore trailing
+ junk at end of the received message.
+
+ Returns a ``dns.message.Message``.
+ """
+
+ wire = q.to_wire()
+ (af, destination, source) = _destination_and_source(af, where, port,
+ source, source_port)
+ s = socket_factory(af, socket.SOCK_DGRAM, 0)
+ received_time = None
+ sent_time = None
+ try:
+ expiration = _compute_expiration(timeout)
+ s.setblocking(0)
+ if source is not None:
+ s.bind(source)
+ (_, sent_time) = send_udp(s, wire, destination, expiration)
+ (r, received_time) = receive_udp(s, destination, expiration,
+ ignore_unexpected, one_rr_per_rrset,
+ q.keyring, q.mac, ignore_trailing)
+ finally:
+ if sent_time is None or received_time is None:
+ response_time = 0
+ else:
+ response_time = received_time - sent_time
+ s.close()
+ r.time = response_time
+ if not q.is_response(r):
+ raise BadResponse
+ return r
+
+
+def _net_read(sock, count, expiration):
+ """Read the specified number of bytes from sock. Keep trying until we
+ either get the desired amount, or we hit EOF.
+ A Timeout exception will be raised if the operation is not completed
+ by the expiration time.
+ """
+ s = b''
+ while count > 0:
+ _wait_for_readable(sock, expiration)
+ n = sock.recv(count)
+ if n == b'':
+ raise EOFError
+ count = count - len(n)
+ s = s + n
+ return s
+
+
+def _net_write(sock, data, expiration):
+ """Write the specified data to the socket.
+ A Timeout exception will be raised if the operation is not completed
+ by the expiration time.
+ """
+ current = 0
+ l = len(data)
+ while current < l:
+ _wait_for_writable(sock, expiration)
+ current += sock.send(data[current:])
+
+
+def send_tcp(sock, what, expiration=None):
+ """Send a DNS message to the specified TCP socket.
+
+ *sock*, a ``socket``.
+
+ *what*, a ``binary`` or ``dns.message.Message``, the message to send.
+
+ *expiration*, a ``float`` or ``None``, the absolute time at which
+ a timeout exception should be raised. If ``None``, no timeout will
+ occur.
+
+ Returns an ``(int, float)`` tuple of bytes sent and the sent time.
+ """
+
+ if isinstance(what, dns.message.Message):
+ what = what.to_wire()
+ l = len(what)
+ # copying the wire into tcpmsg is inefficient, but lets us
+ # avoid writev() or doing a short write that would get pushed
+ # onto the net
+ tcpmsg = struct.pack("!H", l) + what
+ _wait_for_writable(sock, expiration)
+ sent_time = time.time()
+ _net_write(sock, tcpmsg, expiration)
+ return (len(tcpmsg), sent_time)
+
+def receive_tcp(sock, expiration=None, one_rr_per_rrset=False,
+ keyring=None, request_mac=b'', ignore_trailing=False):
+ """Read a DNS message from a TCP socket.
+
+ *sock*, a ``socket``.
+
+ *expiration*, a ``float`` or ``None``, the absolute time at which
+ a timeout exception should be raised. If ``None``, no timeout will
+ occur.
+
+ *one_rr_per_rrset*, a ``bool``. If ``True``, put each RR into its own
+ RRset.
+
+ *keyring*, a ``dict``, the keyring to use for TSIG.
+
+ *request_mac*, a ``binary``, the MAC of the request (for TSIG).
+
+ *ignore_trailing*, a ``bool``. If ``True``, ignore trailing
+ junk at end of the received message.
+
+ Raises if the message is malformed, if network errors occur, or if
+ there is a timeout.
+
+ Returns a ``dns.message.Message`` object.
+ """
+
+ ldata = _net_read(sock, 2, expiration)
+ (l,) = struct.unpack("!H", ldata)
+ wire = _net_read(sock, l, expiration)
+ received_time = time.time()
+ r = dns.message.from_wire(wire, keyring=keyring, request_mac=request_mac,
+ one_rr_per_rrset=one_rr_per_rrset,
+ ignore_trailing=ignore_trailing)
+ return (r, received_time)
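`send_tcp()` and `receive_tcp()` rely on the RFC 1035 TCP framing: each DNS message is preceded by a two-byte big-endian length. A standalone sketch of that framing (hypothetical `frame`/`unframe` helpers, operating on in-memory byte strings rather than sockets):

```python
import struct

def frame(wire):
    """Prefix a message with its 16-bit network-order length."""
    return struct.pack("!H", len(wire)) + wire

def unframe(stream):
    """Split one length-prefixed message off the front of a byte stream."""
    (n,) = struct.unpack("!H", stream[:2])
    return stream[2:2 + n], stream[2 + n:]
```

This is why `receive_tcp()` first calls `_net_read(sock, 2, ...)` for the length and then reads exactly that many bytes for the message body.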
+
+def _connect(s, address):
+ try:
+ s.connect(address)
+ except socket.error:
+ (ty, v) = sys.exc_info()[:2]
+
+ if hasattr(v, 'errno'):
+ v_err = v.errno
+ else:
+ v_err = v[0]
+ if v_err not in [errno.EINPROGRESS, errno.EWOULDBLOCK, errno.EALREADY]:
+ raise v
+
+
+def tcp(q, where, timeout=None, port=53, af=None, source=None, source_port=0,
+ one_rr_per_rrset=False, ignore_trailing=False):
+ """Return the response obtained after sending a query via TCP.
+
+ *q*, a ``dns.message.Message``, the query to send
+
+ *where*, a ``text`` containing an IPv4 or IPv6 address, where
+ to send the message.
+
+ *timeout*, a ``float`` or ``None``, the number of seconds to wait before the
+ query times out. If ``None``, the default, wait forever.
+
+ *port*, an ``int``, the port to send the message to. The default is 53.
+
+ *af*, an ``int``, the address family to use. The default is ``None``,
+ which causes the address family to be inferred from the form of
+ *where*. If the inference attempt fails, AF_INET is used. This
+ parameter is historical; you need never set it.
+
+ *source*, a ``text`` containing an IPv4 or IPv6 address, specifying
+ the source address. The default is the wildcard address.
+
+ *source_port*, an ``int``, the port from which to send the message.
+ The default is 0.
+
+ *one_rr_per_rrset*, a ``bool``. If ``True``, put each RR into its own
+ RRset.
+
+ *ignore_trailing*, a ``bool``. If ``True``, ignore trailing
+ junk at end of the received message.
+
+ Returns a ``dns.message.Message``.
+ """
+
+ wire = q.to_wire()
+ (af, destination, source) = _destination_and_source(af, where, port,
+ source, source_port)
+ s = socket_factory(af, socket.SOCK_STREAM, 0)
+ begin_time = None
+ received_time = None
+ try:
+ expiration = _compute_expiration(timeout)
+ s.setblocking(0)
+ begin_time = time.time()
+ if source is not None:
+ s.bind(source)
+ _connect(s, destination)
+ send_tcp(s, wire, expiration)
+ (r, received_time) = receive_tcp(s, expiration, one_rr_per_rrset,
+ q.keyring, q.mac, ignore_trailing)
+ finally:
+ if begin_time is None or received_time is None:
+ response_time = 0
+ else:
+ response_time = received_time - begin_time
+ s.close()
+ r.time = response_time
+ if not q.is_response(r):
+ raise BadResponse
+ return r
+
+
+def xfr(where, zone, rdtype=dns.rdatatype.AXFR, rdclass=dns.rdataclass.IN,
+ timeout=None, port=53, keyring=None, keyname=None, relativize=True,
+ af=None, lifetime=None, source=None, source_port=0, serial=0,
+ use_udp=False, keyalgorithm=dns.tsig.default_algorithm):
+ """Return a generator for the responses to a zone transfer.
+
+ *where*, a ``text`` containing an IPv4 or IPv6 address, where to
+ send the message.
+
+ *zone*, a ``dns.name.Name`` or ``text``, the name of the zone to transfer.
+
+ *rdtype*, an ``int`` or ``text``, the type of zone transfer. The
+ default is ``dns.rdatatype.AXFR``. ``dns.rdatatype.IXFR`` can be
+ used to do an incremental transfer instead.
+
+ *rdclass*, an ``int`` or ``text``, the class of the zone transfer.
+ The default is ``dns.rdataclass.IN``.
+
+ *timeout*, a ``float``, the number of seconds to wait for each
+ response message. If ``None``, the default, wait forever.
+
+ *port*, an ``int``, the port to send the message to. The default is 53.
+
+ *keyring*, a ``dict``, the keyring to use for TSIG.
+
+ *keyname*, a ``dns.name.Name`` or ``text``, the name of the TSIG
+ key to use.
+
+ *relativize*, a ``bool``. If ``True``, all names in the zone will be
+ relativized to the zone origin. It is essential that the
+ relativize setting matches the one specified to
+ ``dns.zone.from_xfr()`` if using this generator to make a zone.
+
+ *af*, an ``int``, the address family to use. The default is ``None``,
+ which causes the address family to be inferred from the form of
+ *where*. If the inference attempt fails, AF_INET is used. This
+ parameter is historical; you need never set it.
+
+ *lifetime*, a ``float``, the total number of seconds to spend
+ doing the transfer. If ``None``, the default, then there is no
+ limit on the time the transfer may take.
+
+ *source*, a ``text`` containing an IPv4 or IPv6 address, specifying
+ the source address. The default is the wildcard address.
+
+ *source_port*, an ``int``, the port from which to send the message.
+ The default is 0.
+
+ *serial*, an ``int``, the SOA serial number to use as the base for
+ an IXFR diff sequence (only meaningful if *rdtype* is
+ ``dns.rdatatype.IXFR``).
+
+ *use_udp*, a ``bool``. If ``True``, use UDP (only meaningful for IXFR).
+
+ *keyalgorithm*, a ``dns.name.Name`` or ``text``, the TSIG algorithm to use.
+
+ Raises on errors, and so does the generator.
+
+ Returns a generator of ``dns.message.Message`` objects.
+ """
+
+ if isinstance(zone, string_types):
+ zone = dns.name.from_text(zone)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ q = dns.message.make_query(zone, rdtype, rdclass)
+ if rdtype == dns.rdatatype.IXFR:
+ rrset = dns.rrset.from_text(zone, 0, 'IN', 'SOA',
+ '. . %u 0 0 0 0' % serial)
+ q.authority.append(rrset)
+ if keyring is not None:
+ q.use_tsig(keyring, keyname, algorithm=keyalgorithm)
+ wire = q.to_wire()
+ (af, destination, source) = _destination_and_source(af, where, port,
+ source, source_port)
+ if use_udp:
+ if rdtype != dns.rdatatype.IXFR:
+ raise ValueError('cannot do a UDP AXFR')
+ s = socket_factory(af, socket.SOCK_DGRAM, 0)
+ else:
+ s = socket_factory(af, socket.SOCK_STREAM, 0)
+ s.setblocking(0)
+ if source is not None:
+ s.bind(source)
+ expiration = _compute_expiration(lifetime)
+ _connect(s, destination)
+ l = len(wire)
+ if use_udp:
+ _wait_for_writable(s, expiration)
+ s.send(wire)
+ else:
+ tcpmsg = struct.pack("!H", l) + wire
+ _net_write(s, tcpmsg, expiration)
+ done = False
+ delete_mode = True
+ expecting_SOA = False
+ soa_rrset = None
+ if relativize:
+ origin = zone
+ oname = dns.name.empty
+ else:
+ origin = None
+ oname = zone
+ tsig_ctx = None
+ first = True
+ while not done:
+ mexpiration = _compute_expiration(timeout)
+ if mexpiration is None or mexpiration > expiration:
+ mexpiration = expiration
+ if use_udp:
+ _wait_for_readable(s, expiration)
+ (wire, from_address) = s.recvfrom(65535)
+ else:
+ ldata = _net_read(s, 2, mexpiration)
+ (l,) = struct.unpack("!H", ldata)
+ wire = _net_read(s, l, mexpiration)
+ is_ixfr = (rdtype == dns.rdatatype.IXFR)
+ r = dns.message.from_wire(wire, keyring=q.keyring, request_mac=q.mac,
+ xfr=True, origin=origin, tsig_ctx=tsig_ctx,
+ multi=True, first=first,
+ one_rr_per_rrset=is_ixfr)
+ rcode = r.rcode()
+ if rcode != dns.rcode.NOERROR:
+ raise TransferError(rcode)
+ tsig_ctx = r.tsig_ctx
+ first = False
+ answer_index = 0
+ if soa_rrset is None:
+ if not r.answer or r.answer[0].name != oname:
+ raise dns.exception.FormError(
+ "No answer or RRset not for qname")
+ rrset = r.answer[0]
+ if rrset.rdtype != dns.rdatatype.SOA:
+ raise dns.exception.FormError("first RRset is not an SOA")
+ answer_index = 1
+ soa_rrset = rrset.copy()
+ if rdtype == dns.rdatatype.IXFR:
+ if soa_rrset[0].serial <= serial:
+ #
+ # We're already up-to-date.
+ #
+ done = True
+ else:
+ expecting_SOA = True
+ #
+ # Process SOAs in the answer section (other than the initial
+ # SOA in the first message).
+ #
+ for rrset in r.answer[answer_index:]:
+ if done:
+ raise dns.exception.FormError("answers after final SOA")
+ if rrset.rdtype == dns.rdatatype.SOA and rrset.name == oname:
+ if expecting_SOA:
+ if rrset[0].serial != serial:
+ raise dns.exception.FormError(
+ "IXFR base serial mismatch")
+ expecting_SOA = False
+ elif rdtype == dns.rdatatype.IXFR:
+ delete_mode = not delete_mode
+ #
+ # If this SOA RRset is equal to the first we saw then we're
+ # finished. If this is an IXFR we also check that we're seeing
+ # the record in the expected part of the response.
+ #
+ if rrset == soa_rrset and \
+ (rdtype == dns.rdatatype.AXFR or
+ (rdtype == dns.rdatatype.IXFR and delete_mode)):
+ done = True
+ elif expecting_SOA:
+ #
+ # We made an IXFR request and are expecting another
+ # SOA RR, but saw something else, so this must be an
+ # AXFR response.
+ #
+ rdtype = dns.rdatatype.AXFR
+ expecting_SOA = False
+ if done and q.keyring and not r.had_tsig:
+ raise dns.exception.FormError("missing TSIG")
+ yield r
+ s.close()
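Reviewer context for the framing above: DNS over TCP prefixes every message with a two-byte big-endian length, which is why the transfer loop packs `struct.pack("!H", l) + wire` before sending and reads exactly two bytes before each reply. A minimal plain-Python sketch of that framing (hypothetical helper names, not dnspython API):

```python
import struct

def frame_tcp_message(wire):
    # DNS over TCP: two-byte big-endian length prefix, then the message body.
    return struct.pack("!H", len(wire)) + wire

def unframe_tcp_message(data):
    # Read the length prefix, then slice out exactly that many bytes.
    (length,) = struct.unpack("!H", data[:2])
    return data[2:2 + length]

msg = b"\x12\x34\x01\x00"          # stand-in for a DNS query
framed = frame_tcp_message(msg)
assert framed[:2] == b"\x00\x04"
assert unframe_tcp_message(framed) == msg
```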
diff --git a/openpype/vendor/python/python_2/dns/rcode.py b/openpype/vendor/python/python_2/dns/rcode.py
new file mode 100644
index 0000000000..5191e1b18c
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rcode.py
@@ -0,0 +1,144 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Result Codes."""
+
+import dns.exception
+from ._compat import long
+
+#: No error
+NOERROR = 0
+#: Form error
+FORMERR = 1
+#: Server failure
+SERVFAIL = 2
+#: Name does not exist ("Name Error" in RFC 1035 terminology).
+NXDOMAIN = 3
+#: Not implemented
+NOTIMP = 4
+#: Refused
+REFUSED = 5
+#: Name exists.
+YXDOMAIN = 6
+#: RRset exists.
+YXRRSET = 7
+#: RRset does not exist.
+NXRRSET = 8
+#: Not authoritative.
+NOTAUTH = 9
+#: Name not in zone.
+NOTZONE = 10
+#: Bad EDNS version.
+BADVERS = 16
+
+_by_text = {
+ 'NOERROR': NOERROR,
+ 'FORMERR': FORMERR,
+ 'SERVFAIL': SERVFAIL,
+ 'NXDOMAIN': NXDOMAIN,
+ 'NOTIMP': NOTIMP,
+ 'REFUSED': REFUSED,
+ 'YXDOMAIN': YXDOMAIN,
+ 'YXRRSET': YXRRSET,
+ 'NXRRSET': NXRRSET,
+ 'NOTAUTH': NOTAUTH,
+ 'NOTZONE': NOTZONE,
+ 'BADVERS': BADVERS
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mapping not to be a true inverse.
+
+_by_value = {y: x for x, y in _by_text.items()}
+
+
+class UnknownRcode(dns.exception.DNSException):
+ """A DNS rcode is unknown."""
+
+
+def from_text(text):
+ """Convert text into an rcode.
+
+ *text*, a ``text``, the textual rcode or an integer in textual form.
+
+ Raises ``dns.rcode.UnknownRcode`` if the rcode mnemonic is unknown.
+
+ Returns an ``int``.
+ """
+
+ if text.isdigit():
+ v = int(text)
+ if v >= 0 and v <= 4095:
+ return v
+ v = _by_text.get(text.upper())
+ if v is None:
+ raise UnknownRcode
+ return v
+
+
+def from_flags(flags, ednsflags):
+ """Return the rcode value encoded by flags and ednsflags.
+
+ *flags*, an ``int``, the DNS flags field.
+
+ *ednsflags*, an ``int``, the EDNS flags field.
+
+ Raises ``ValueError`` if rcode is < 0 or > 4095.
+
+ Returns an ``int``.
+ """
+
+ value = (flags & 0x000f) | ((ednsflags >> 20) & 0xff0)
+ if value < 0 or value > 4095:
+ raise ValueError('rcode must be >= 0 and <= 4095')
+ return value
+
+
+def to_flags(value):
+ """Return a (flags, ednsflags) tuple which encodes the rcode.
+
+ *value*, an ``int``, the rcode.
+
+ Raises ``ValueError`` if rcode is < 0 or > 4095.
+
+ Returns an ``(int, int)`` tuple.
+ """
+
+ if value < 0 or value > 4095:
+ raise ValueError('rcode must be >= 0 and <= 4095')
+ v = value & 0xf
+ ev = long(value & 0xff0) << 20
+ return (v, ev)
+
+
+def to_text(value):
+ """Convert rcode into text.
+
+ *value*, an ``int``, the rcode.
+
+ Raises ``ValueError`` if rcode is < 0 or > 4095.
+
+ Returns a ``text``.
+ """
+
+ if value < 0 or value > 4095:
+ raise ValueError('rcode must be >= 0 and <= 4095')
+ text = _by_value.get(value)
+ if text is None:
+ text = str(value)
+ return text
diff --git a/openpype/vendor/python/python_2/dns/rdata.py b/openpype/vendor/python/python_2/dns/rdata.py
new file mode 100644
index 0000000000..ea1971dc5f
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdata.py
@@ -0,0 +1,456 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS rdata."""
+
+from io import BytesIO
+import base64
+import binascii
+
+import dns.exception
+import dns.name
+import dns.rdataclass
+import dns.rdatatype
+import dns.tokenizer
+import dns.wiredata
+from ._compat import xrange, string_types, text_type
+
+try:
+ import threading as _threading
+except ImportError:
+ import dummy_threading as _threading
+
+_hex_chunksize = 32
+
+
+def _hexify(data, chunksize=_hex_chunksize):
+ """Convert a binary string into its hex encoding, broken up into chunks
+ of chunksize characters separated by a space.
+ """
+
+ line = binascii.hexlify(data)
+ return b' '.join([line[i:i + chunksize]
+ for i
+ in range(0, len(line), chunksize)]).decode()
+
+_base64_chunksize = 32
+
+
+def _base64ify(data, chunksize=_base64_chunksize):
+ """Convert a binary string into its base64 encoding, broken up into chunks
+ of chunksize characters separated by a space.
+ """
+
+ line = base64.b64encode(data)
+ return b' '.join([line[i:i + chunksize]
+ for i
+ in range(0, len(line), chunksize)]).decode()
+
+__escaped = bytearray(b'"\\')
+
+def _escapify(qstring):
+ """Escape the characters in a quoted string which need it."""
+
+ if isinstance(qstring, text_type):
+ qstring = qstring.encode()
+ if not isinstance(qstring, bytearray):
+ qstring = bytearray(qstring)
+
+ text = ''
+ for c in qstring:
+ if c in __escaped:
+ text += '\\' + chr(c)
+ elif c >= 0x20 and c < 0x7F:
+ text += chr(c)
+ else:
+ text += '\\%03d' % c
+ return text
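To make the escaping rule above concrete: `"` and `\` are backslash-escaped, printable ASCII passes through, and everything else becomes a three-digit decimal `\DDD` escape. A standalone sketch of the same loop:

```python
def escapify(qstring):
    # Quoted-string escaping: backslash-escape '"' and '\', keep printable
    # ASCII as-is, and render other bytes as \DDD decimal escapes.
    escaped = bytearray(b'"\\')
    text = ''
    for c in bytearray(qstring):
        if c in escaped:
            text += '\\' + chr(c)
        elif 0x20 <= c < 0x7F:
            text += chr(c)
        else:
            text += '\\%03d' % c
    return text

assert escapify(b'a"b\x07') == 'a\\"b\\007'
```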
+
+
+def _truncate_bitmap(what):
+ """Determine the index of greatest byte that isn't all zeros, and
+ return the bitmap that contains all the bytes less than that index.
+ """
+
+ for i in xrange(len(what) - 1, -1, -1):
+ if what[i] != 0:
+ return what[0: i + 1]
+ return what[0:1]
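The scan above walks backwards from the end of the bitmap and cuts after the last nonzero byte, falling back to a single zero byte for an all-zero bitmap. The same logic as a runnable sketch:

```python
def truncate_bitmap(what):
    # Drop trailing all-zero bytes, keeping at least one byte.
    for i in range(len(what) - 1, -1, -1):
        if what[i] != 0:
            return what[:i + 1]
    return what[:1]

assert truncate_bitmap(b'\x40\x01\x00\x00') == b'\x40\x01'
assert truncate_bitmap(b'\x00\x00') == b'\x00'
```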
+
+
+class Rdata(object):
+ """Base class for all DNS rdata types."""
+
+ __slots__ = ['rdclass', 'rdtype']
+
+ def __init__(self, rdclass, rdtype):
+ """Initialize an rdata.
+
+ *rdclass*, an ``int`` is the rdataclass of the Rdata.
+ *rdtype*, an ``int`` is the rdatatype of the Rdata.
+ """
+
+ self.rdclass = rdclass
+ self.rdtype = rdtype
+
+ def covers(self):
+ """Return the type a Rdata covers.
+
+ DNS SIG/RRSIG rdatas apply to a specific type; this type is
+ returned by the covers() function. If the rdata type is not
+ SIG or RRSIG, dns.rdatatype.NONE is returned. This is useful when
+ creating rdatasets, allowing the rdataset to contain only RRSIGs
+ of a particular type, e.g. RRSIG(NS).
+
+ Returns an ``int``.
+ """
+
+ return dns.rdatatype.NONE
+
+ def extended_rdatatype(self):
+ """Return a 32-bit type value, the least significant 16 bits of
+ which are the ordinary DNS type, and the upper 16 bits of which are
+ the "covered" type, if any.
+
+ Returns an ``int``.
+ """
+
+ return self.covers() << 16 | self.rdtype
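The 32-bit packing in `extended_rdatatype` above keeps the ordinary type in the low 16 bits and the covered type (nonzero only for SIG/RRSIG) in the high 16 bits, so an RRSIG(NS) sorts and hashes distinctly from an RRSIG(SOA). A standalone illustration (type numbers are the standard IANA values):

```python
def extended_rdatatype(rdtype, covers=0):
    # Upper 16 bits: covered type (for SIG/RRSIG); lower 16 bits: the type.
    return covers << 16 | rdtype

RRSIG, NS = 46, 2
ext = extended_rdatatype(RRSIG, covers=NS)
assert ext == 0x2002E
assert (ext >> 16, ext & 0xFFFF) == (NS, RRSIG)
```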
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ """Convert an rdata to text format.
+
+ Returns a ``text``.
+ """
+
+ raise NotImplementedError
+
+ def to_wire(self, file, compress=None, origin=None):
+ """Convert an rdata to wire format.
+
+ Returns a ``binary``.
+ """
+
+ raise NotImplementedError
+
+ def to_digestable(self, origin=None):
+ """Convert rdata to a format suitable for digesting in hashes. This
+ is also the DNSSEC canonical form.
+
+ Returns a ``binary``.
+ """
+
+ f = BytesIO()
+ self.to_wire(f, None, origin)
+ return f.getvalue()
+
+ def validate(self):
+ """Check that the current contents of the rdata's fields are
+ valid.
+
+ If you change an rdata by assigning to its fields,
+ it is a good idea to call validate() when you are done making
+ changes.
+
+ Raises various exceptions if there are problems.
+
+ Returns ``None``.
+ """
+
+ dns.rdata.from_text(self.rdclass, self.rdtype, self.to_text())
+
+ def __repr__(self):
+ covers = self.covers()
+ if covers == dns.rdatatype.NONE:
+ ctext = ''
+ else:
+ ctext = '(' + dns.rdatatype.to_text(covers) + ')'
+ return ('<DNS ' + dns.rdataclass.to_text(self.rdclass) + ' ' +
+ dns.rdatatype.to_text(self.rdtype) + ctext + ' rdata: ' +
+ str(self) + '>')
+
+ def __str__(self):
+ return self.to_text()
+
+ def _cmp(self, other):
+ """Compare an rdata with another rdata of the same rdtype and
+ rdclass.
+
+ Return < 0 if self < other in the DNSSEC ordering, 0 if self
+ == other, and > 0 if self > other.
+
+ """
+ our = self.to_digestable(dns.name.root)
+ their = other.to_digestable(dns.name.root)
+ if our == their:
+ return 0
+ elif our > their:
+ return 1
+ else:
+ return -1
+
+ def __eq__(self, other):
+ if not isinstance(other, Rdata):
+ return False
+ if self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return False
+ return self._cmp(other) == 0
+
+ def __ne__(self, other):
+ if not isinstance(other, Rdata):
+ return True
+ if self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return True
+ return self._cmp(other) != 0
+
+ def __lt__(self, other):
+ if not isinstance(other, Rdata) or \
+ self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return NotImplemented
+ return self._cmp(other) < 0
+
+ def __le__(self, other):
+ if not isinstance(other, Rdata) or \
+ self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return NotImplemented
+ return self._cmp(other) <= 0
+
+ def __ge__(self, other):
+ if not isinstance(other, Rdata) or \
+ self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return NotImplemented
+ return self._cmp(other) >= 0
+
+ def __gt__(self, other):
+ if not isinstance(other, Rdata) or \
+ self.rdclass != other.rdclass or self.rdtype != other.rdtype:
+ return NotImplemented
+ return self._cmp(other) > 0
+
+ def __hash__(self):
+ return hash(self.to_digestable(dns.name.root))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ raise NotImplementedError
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ raise NotImplementedError
+
+ def choose_relativity(self, origin=None, relativize=True):
+ """Convert any domain names in the rdata to the specified
+ relativization.
+ """
+
+class GenericRdata(Rdata):
+
+ """Generic Rdata Class
+
+ This class is used for rdata types for which we have no better
+ implementation. It implements the DNS "unknown RRs" scheme.
+ """
+
+ __slots__ = ['data']
+
+ def __init__(self, rdclass, rdtype, data):
+ super(GenericRdata, self).__init__(rdclass, rdtype)
+ self.data = data
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return r'\# %d ' % len(self.data) + _hexify(self.data)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ token = tok.get()
+ if not token.is_identifier() or token.value != r'\#':
+ raise dns.exception.SyntaxError(
+ r'generic rdata does not start with \#')
+ length = tok.get_int()
+ chunks = []
+ while 1:
+ token = tok.get()
+ if token.is_eol_or_eof():
+ break
+ chunks.append(token.value.encode())
+ hex = b''.join(chunks)
+ data = binascii.unhexlify(hex)
+ if len(data) != length:
+ raise dns.exception.SyntaxError(
+ 'generic rdata hex data has wrong length')
+ return cls(rdclass, rdtype, data)
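`GenericRdata.from_text` above parses RFC 3597 "unknown RR" syntax: a literal `\#`, a declared byte length, then hex data that may be split across whitespace. A simplified standalone parser showing the same validation (hypothetical helper, not the dnspython tokenizer):

```python
import binascii

def parse_generic_rdata(text):
    # RFC 3597 unknown-RR syntax: '\# <length> <hex...>'.
    tokens = text.split()
    if tokens[0] != r'\#':
        raise ValueError(r'generic rdata does not start with \#')
    length = int(tokens[1])
    data = binascii.unhexlify(''.join(tokens[2:]))
    if len(data) != length:
        raise ValueError('generic rdata hex data has wrong length')
    return data

# An A record's rdata (10.0.0.1) in generic syntax, hex split into chunks.
assert parse_generic_rdata(r'\# 4 0a00 0001') == b'\x0a\x00\x00\x01'
```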
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(self.data)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ return cls(rdclass, rdtype, wire[current: current + rdlen])
+
+_rdata_modules = {}
+_module_prefix = 'dns.rdtypes'
+_import_lock = _threading.Lock()
+
+def get_rdata_class(rdclass, rdtype):
+
+ def import_module(name):
+ with _import_lock:
+ mod = __import__(name)
+ components = name.split('.')
+ for comp in components[1:]:
+ mod = getattr(mod, comp)
+ return mod
+
+ mod = _rdata_modules.get((rdclass, rdtype))
+ rdclass_text = dns.rdataclass.to_text(rdclass)
+ rdtype_text = dns.rdatatype.to_text(rdtype)
+ rdtype_text = rdtype_text.replace('-', '_')
+ if not mod:
+ mod = _rdata_modules.get((dns.rdatatype.ANY, rdtype))
+ if not mod:
+ try:
+ mod = import_module('.'.join([_module_prefix,
+ rdclass_text, rdtype_text]))
+ _rdata_modules[(rdclass, rdtype)] = mod
+ except ImportError:
+ try:
+ mod = import_module('.'.join([_module_prefix,
+ 'ANY', rdtype_text]))
+ _rdata_modules[(dns.rdataclass.ANY, rdtype)] = mod
+ except ImportError:
+ mod = None
+ if mod:
+ cls = getattr(mod, rdtype_text)
+ else:
+ cls = GenericRdata
+ return cls
+
+
+def from_text(rdclass, rdtype, tok, origin=None, relativize=True):
+ """Build an rdata object from text format.
+
+ This function attempts to dynamically load a class which
+ implements the specified rdata class and type. If there is no
+ class-and-type-specific implementation, the GenericRdata class
+ is used.
+
+ Once a class is chosen, its from_text() class method is called
+ with the parameters to this function.
+
+ If *tok* is a ``text``, then a tokenizer is created and the string
+ is used as its input.
+
+ *rdclass*, an ``int``, the rdataclass.
+
+ *rdtype*, an ``int``, the rdatatype.
+
+ *tok*, a ``dns.tokenizer.Tokenizer`` or a ``text``.
+
+ *origin*, a ``dns.name.Name`` (or ``None``), the
+ origin to use for relative names.
+
+ *relativize*, a ``bool``. If true, name will be relativized to
+ the specified origin.
+
+ Returns an instance of the chosen Rdata subclass.
+ """
+
+ if isinstance(tok, string_types):
+ tok = dns.tokenizer.Tokenizer(tok)
+ cls = get_rdata_class(rdclass, rdtype)
+ if cls != GenericRdata:
+ # peek at first token
+ token = tok.get()
+ tok.unget(token)
+ if token.is_identifier() and \
+ token.value == r'\#':
+ #
+ # Known type using the generic syntax. Extract the
+ # wire form from the generic syntax, and then run
+ # from_wire on it.
+ #
+ rdata = GenericRdata.from_text(rdclass, rdtype, tok, origin,
+ relativize)
+ return from_wire(rdclass, rdtype, rdata.data, 0, len(rdata.data),
+ origin)
+ return cls.from_text(rdclass, rdtype, tok, origin, relativize)
+
+
+def from_wire(rdclass, rdtype, wire, current, rdlen, origin=None):
+ """Build an rdata object from wire format
+
+ This function attempts to dynamically load a class which
+ implements the specified rdata class and type. If there is no
+ class-and-type-specific implementation, the GenericRdata class
+ is used.
+
+ Once a class is chosen, its from_wire() class method is called
+ with the parameters to this function.
+
+ *rdclass*, an ``int``, the rdataclass.
+
+ *rdtype*, an ``int``, the rdatatype.
+
+ *wire*, a ``binary``, the wire-format message.
+
+ *current*, an ``int``, the offset in wire of the beginning of
+ the rdata.
+
+ *rdlen*, an ``int``, the length of the wire-format rdata
+
+ *origin*, a ``dns.name.Name`` (or ``None``). If not ``None``,
+ then names will be relativized to this origin.
+
+ Returns an instance of the chosen Rdata subclass.
+ """
+
+ wire = dns.wiredata.maybe_wrap(wire)
+ cls = get_rdata_class(rdclass, rdtype)
+ return cls.from_wire(rdclass, rdtype, wire, current, rdlen, origin)
+
+
+class RdatatypeExists(dns.exception.DNSException):
+ """DNS rdatatype already exists."""
+ supp_kwargs = {'rdclass', 'rdtype'}
+ fmt = "The rdata type with class {rdclass} and rdtype {rdtype} " + \
+ "already exists."
+
+
+def register_type(implementation, rdtype, rdtype_text, is_singleton=False,
+ rdclass=dns.rdataclass.IN):
+ """Dynamically register a module to handle an rdatatype.
+
+ *implementation*, a module implementing the type in the usual dnspython
+ way.
+
+ *rdtype*, an ``int``, the rdatatype to register.
+
+ *rdtype_text*, a ``text``, the textual form of the rdatatype.
+
+ *is_singleton*, a ``bool``, indicating if the type is a singleton (i.e.
+ RRsets of the type can have only one member.)
+
+ *rdclass*, the rdataclass of the type, or ``dns.rdataclass.ANY`` if
+ it applies to all classes.
+ """
+
+ existing_cls = get_rdata_class(rdclass, rdtype)
+ if existing_cls != GenericRdata:
+ raise RdatatypeExists(rdclass=rdclass, rdtype=rdtype)
+ _rdata_modules[(rdclass, rdtype)] = implementation
+ dns.rdatatype.register_type(rdtype, rdtype_text, is_singleton)
diff --git a/openpype/vendor/python/python_2/dns/rdataclass.py b/openpype/vendor/python/python_2/dns/rdataclass.py
new file mode 100644
index 0000000000..b88aa85b7b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdataclass.py
@@ -0,0 +1,122 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Rdata Classes."""
+
+import re
+
+import dns.exception
+
+RESERVED0 = 0
+IN = 1
+CH = 3
+HS = 4
+NONE = 254
+ANY = 255
+
+_by_text = {
+ 'RESERVED0': RESERVED0,
+ 'IN': IN,
+ 'CH': CH,
+ 'HS': HS,
+ 'NONE': NONE,
+ 'ANY': ANY
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mapping not to be a true inverse.
+
+_by_value = {y: x for x, y in _by_text.items()}
+
+# Now that we've built the inverse map, we can add class aliases to
+# the _by_text mapping.
+
+_by_text.update({
+ 'INTERNET': IN,
+ 'CHAOS': CH,
+ 'HESIOD': HS
+})
+
+_metaclasses = {
+ NONE: True,
+ ANY: True
+}
+
+_unknown_class_pattern = re.compile('CLASS([0-9]+)$', re.I)
+
+
+class UnknownRdataclass(dns.exception.DNSException):
+ """A DNS class is unknown."""
+
+
+def from_text(text):
+ """Convert text into a DNS rdata class value.
+
+ The input text can be a defined DNS RR class mnemonic or
+ instance of the DNS generic class syntax.
+
+ For example, "IN" and "CLASS1" will both result in a value of 1.
+
+ Raises ``dns.rdataclass.UnknownRdataclass`` if the class is unknown.
+
+ Raises ``ValueError`` if the rdata class value is not >= 0 and <= 65535.
+
+ Returns an ``int``.
+ """
+
+ value = _by_text.get(text.upper())
+ if value is None:
+ match = _unknown_class_pattern.match(text)
+ if match is None:
+ raise UnknownRdataclass
+ value = int(match.group(1))
+ if value < 0 or value > 65535:
+ raise ValueError("class must be >= 0 and <= 65535")
+ return value
+
+
+def to_text(value):
+ """Convert a DNS rdata type value to text.
+
+ If the value has a known mnemonic, it will be used, otherwise the
+ DNS generic class syntax will be used.
+
+ Raises ``ValueError`` if the rdata class value is not >= 0 and <= 65535.
+
+ Returns a ``str``.
+ """
+
+ if value < 0 or value > 65535:
+ raise ValueError("class must be >= 0 and <= 65535")
+ text = _by_value.get(value)
+ if text is None:
+ text = 'CLASS' + repr(value)
+ return text
+
+
+def is_metaclass(rdclass):
+ """True if the specified class is a metaclass.
+
+ The currently defined metaclasses are ANY and NONE.
+
+ *rdclass* is an ``int``.
+ """
+
+ if rdclass in _metaclasses:
+ return True
+ return False
diff --git a/openpype/vendor/python/python_2/dns/rdataset.py b/openpype/vendor/python/python_2/dns/rdataset.py
new file mode 100644
index 0000000000..f1afe24198
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdataset.py
@@ -0,0 +1,347 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS rdatasets (an rdataset is a set of rdatas of a given type and class)"""
+
+import random
+from io import StringIO
+import struct
+
+import dns.exception
+import dns.rdatatype
+import dns.rdataclass
+import dns.rdata
+import dns.set
+from ._compat import string_types
+
+# define SimpleSet here for backwards compatibility
+SimpleSet = dns.set.Set
+
+
+class DifferingCovers(dns.exception.DNSException):
+ """An attempt was made to add a DNS SIG/RRSIG whose covered type
+ is not the same as that of the other rdatas in the rdataset."""
+
+
+class IncompatibleTypes(dns.exception.DNSException):
+ """An attempt was made to add DNS RR data of an incompatible type."""
+
+
+class Rdataset(dns.set.Set):
+
+ """A DNS rdataset."""
+
+ __slots__ = ['rdclass', 'rdtype', 'covers', 'ttl']
+
+ def __init__(self, rdclass, rdtype, covers=dns.rdatatype.NONE, ttl=0):
+ """Create a new rdataset of the specified class and type.
+
+ *rdclass*, an ``int``, the rdataclass.
+
+ *rdtype*, an ``int``, the rdatatype.
+
+ *covers*, an ``int``, the covered rdatatype.
+
+ *ttl*, an ``int``, the TTL.
+ """
+
+ super(Rdataset, self).__init__()
+ self.rdclass = rdclass
+ self.rdtype = rdtype
+ self.covers = covers
+ self.ttl = ttl
+
+ def _clone(self):
+ obj = super(Rdataset, self)._clone()
+ obj.rdclass = self.rdclass
+ obj.rdtype = self.rdtype
+ obj.covers = self.covers
+ obj.ttl = self.ttl
+ return obj
+
+ def update_ttl(self, ttl):
+ """Perform TTL minimization.
+
+ Set the TTL of the rdataset to be the lesser of the set's current
+ TTL or the specified TTL. If the set contains no rdatas, set the TTL
+ to the specified TTL.
+
+ *ttl*, an ``int``.
+ """
+
+ if len(self) == 0:
+ self.ttl = ttl
+ elif ttl < self.ttl:
+ self.ttl = ttl
+
+ def add(self, rd, ttl=None):
+ """Add the specified rdata to the rdataset.
+
+ If the optional *ttl* parameter is supplied, then
+ ``self.update_ttl(ttl)`` will be called prior to adding the rdata.
+
+ *rd*, a ``dns.rdata.Rdata``, the rdata
+
+ *ttl*, an ``int``, the TTL.
+
+ Raises ``dns.rdataset.IncompatibleTypes`` if the type and class
+ do not match the type and class of the rdataset.
+
+ Raises ``dns.rdataset.DifferingCovers`` if the type is a signature
+ type and the covered type does not match that of the rdataset.
+ """
+
+ #
+ # If we're adding a signature, do some special handling to
+ # check that the signature covers the same type as the
+ # other rdatas in this rdataset. If this is the first rdata
+ # in the set, initialize the covers field.
+ #
+ if self.rdclass != rd.rdclass or self.rdtype != rd.rdtype:
+ raise IncompatibleTypes
+ if ttl is not None:
+ self.update_ttl(ttl)
+ if self.rdtype == dns.rdatatype.RRSIG or \
+ self.rdtype == dns.rdatatype.SIG:
+ covers = rd.covers()
+ if len(self) == 0 and self.covers == dns.rdatatype.NONE:
+ self.covers = covers
+ elif self.covers != covers:
+ raise DifferingCovers
+ if dns.rdatatype.is_singleton(rd.rdtype) and len(self) > 0:
+ self.clear()
+ super(Rdataset, self).add(rd)
+
+ def union_update(self, other):
+ self.update_ttl(other.ttl)
+ super(Rdataset, self).union_update(other)
+
+ def intersection_update(self, other):
+ self.update_ttl(other.ttl)
+ super(Rdataset, self).intersection_update(other)
+
+ def update(self, other):
+ """Add all rdatas in other to self.
+
+ *other*, a ``dns.rdataset.Rdataset``, the rdataset from which
+ to update.
+ """
+
+ self.update_ttl(other.ttl)
+ super(Rdataset, self).update(other)
+
+ def __repr__(self):
+ if self.covers == 0:
+ ctext = ''
+ else:
+ ctext = '(' + dns.rdatatype.to_text(self.covers) + ')'
+ return ('<DNS ' + dns.rdataclass.to_text(self.rdclass) + ' ' +
+ dns.rdatatype.to_text(self.rdtype) + ctext + ' rdataset>')
+
+ def __str__(self):
+ return self.to_text()
+
+ def __eq__(self, other):
+ if not isinstance(other, Rdataset):
+ return False
+ if self.rdclass != other.rdclass or \
+ self.rdtype != other.rdtype or \
+ self.covers != other.covers:
+ return False
+ return super(Rdataset, self).__eq__(other)
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+ def to_text(self, name=None, origin=None, relativize=True,
+ override_rdclass=None, **kw):
+ """Convert the rdataset into DNS master file format.
+
+ See ``dns.name.Name.choose_relativity`` for more information
+ on how *origin* and *relativize* determine the way names
+ are emitted.
+
+ Any additional keyword arguments are passed on to the rdata
+ ``to_text()`` method.
+
+ *name*, a ``dns.name.Name``. If name is not ``None``, emit RRs with
+ *name* as the owner name.
+
+ *origin*, a ``dns.name.Name`` or ``None``, the origin for relative
+ names.
+
+ *relativize*, a ``bool``. If ``True``, names will be relativized
+ to *origin*.
+ """
+
+ if name is not None:
+ name = name.choose_relativity(origin, relativize)
+ ntext = str(name)
+ pad = ' '
+ else:
+ ntext = ''
+ pad = ''
+ s = StringIO()
+ if override_rdclass is not None:
+ rdclass = override_rdclass
+ else:
+ rdclass = self.rdclass
+ if len(self) == 0:
+ #
+ # Empty rdatasets are used for the question section, and in
+ # some dynamic updates, so we don't need to print out the TTL
+ # (which is meaningless anyway).
+ #
+ s.write(u'{}{}{} {}\n'.format(ntext, pad,
+ dns.rdataclass.to_text(rdclass),
+ dns.rdatatype.to_text(self.rdtype)))
+ else:
+ for rd in self:
+ s.write(u'%s%s%d %s %s %s\n' %
+ (ntext, pad, self.ttl, dns.rdataclass.to_text(rdclass),
+ dns.rdatatype.to_text(self.rdtype),
+ rd.to_text(origin=origin, relativize=relativize,
+ **kw)))
+ #
+ # We strip off the final \n for the caller's convenience in printing
+ #
+ return s.getvalue()[:-1]
+
+ def to_wire(self, name, file, compress=None, origin=None,
+ override_rdclass=None, want_shuffle=True):
+ """Convert the rdataset to wire format.
+
+ *name*, a ``dns.name.Name`` is the owner name to use.
+
+ *file* is the file where the name is emitted (typically a
+ BytesIO file).
+
+ *compress*, a ``dict``, is the compression table to use. If
+ ``None`` (the default), names will not be compressed.
+
+ *origin* is a ``dns.name.Name`` or ``None``. If the name is
+ relative and origin is not ``None``, then *origin* will be appended
+ to it.
+
+ *override_rdclass*, an ``int``, is used as the class instead of the
+ class of the rdataset. This is useful when rendering rdatasets
+ associated with dynamic updates.
+
+ *want_shuffle*, a ``bool``. If ``True``, then the order of the
+ Rdatas within the Rdataset will be shuffled before rendering.
+
+ Returns an ``int``, the number of records emitted.
+ """
+
+ if override_rdclass is not None:
+ rdclass = override_rdclass
+ want_shuffle = False
+ else:
+ rdclass = self.rdclass
+ file.seek(0, 2)
+ if len(self) == 0:
+ name.to_wire(file, compress, origin)
+ stuff = struct.pack("!HHIH", self.rdtype, rdclass, 0, 0)
+ file.write(stuff)
+ return 1
+ else:
+ if want_shuffle:
+ l = list(self)
+ random.shuffle(l)
+ else:
+ l = self
+ for rd in l:
+ name.to_wire(file, compress, origin)
+ stuff = struct.pack("!HHIH", self.rdtype, rdclass,
+ self.ttl, 0)
+ file.write(stuff)
+ start = file.tell()
+ rd.to_wire(file, compress, origin)
+ end = file.tell()
+ assert end - start < 65536
+ file.seek(start - 2)
+ stuff = struct.pack("!H", end - start)
+ file.write(stuff)
+ file.seek(0, 2)
+ return len(self)
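The wire-rendering loop above uses a back-patching trick: it writes a zero rdlen placeholder, emits the rdata, then seeks back two bytes and patches in the real length. A self-contained sketch of that step with a `BytesIO` buffer (hypothetical helper name):

```python
import struct
from io import BytesIO

def write_rr_body(file, rdata):
    # Write a zero rdlen placeholder, emit the rdata, then seek back and
    # patch the real two-byte big-endian length.
    file.write(struct.pack('!H', 0))       # rdlen placeholder
    start = file.tell()
    file.write(rdata)
    end = file.tell()
    file.seek(start - 2)
    file.write(struct.pack('!H', end - start))
    file.seek(0, 2)                        # return to end of buffer

buf = BytesIO()
write_rr_body(buf, b'\x7f\x00\x00\x01')    # A-record rdata for 127.0.0.1
out = buf.getvalue()
assert struct.unpack('!H', out[:2])[0] == 4
assert out[2:] == b'\x7f\x00\x00\x01'
```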
+
+ def match(self, rdclass, rdtype, covers):
+ """Returns ``True`` if this rdataset matches the specified class,
+ type, and covers.
+ """
+ if self.rdclass == rdclass and \
+ self.rdtype == rdtype and \
+ self.covers == covers:
+ return True
+ return False
+
+
+def from_text_list(rdclass, rdtype, ttl, text_rdatas):
+ """Create an rdataset with the specified class, type, and TTL, and with
+ the specified list of rdatas in text format.
+
+ Returns a ``dns.rdataset.Rdataset`` object.
+ """
+
+ if isinstance(rdclass, string_types):
+ rdclass = dns.rdataclass.from_text(rdclass)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ r = Rdataset(rdclass, rdtype)
+ r.update_ttl(ttl)
+ for t in text_rdatas:
+ rd = dns.rdata.from_text(r.rdclass, r.rdtype, t)
+ r.add(rd)
+ return r
+
+
+def from_text(rdclass, rdtype, ttl, *text_rdatas):
+ """Create an rdataset with the specified class, type, and TTL, and with
+ the specified rdatas in text format.
+
+ Returns a ``dns.rdataset.Rdataset`` object.
+ """
+
+ return from_text_list(rdclass, rdtype, ttl, text_rdatas)
+
+
+def from_rdata_list(ttl, rdatas):
+ """Create an rdataset with the specified TTL, and with
+ the specified list of rdata objects.
+
+ Returns a ``dns.rdataset.Rdataset`` object.
+ """
+
+ if len(rdatas) == 0:
+ raise ValueError("rdata list must not be empty")
+ r = None
+ for rd in rdatas:
+ if r is None:
+ r = Rdataset(rd.rdclass, rd.rdtype)
+ r.update_ttl(ttl)
+ r.add(rd)
+ return r
+
+
+def from_rdata(ttl, *rdatas):
+ """Create an rdataset with the specified TTL, and with
+ the specified rdata objects.
+
+ Returns a ``dns.rdataset.Rdataset`` object.
+ """
+
+ return from_rdata_list(ttl, rdatas)
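The four module-level helpers above all funnel into `from_rdata_list`: a non-empty list of rdatas sharing one class and type, collected under a TTL, with duplicates ignored. Those semantics can be sketched stdlib-only; the `Rdata` namedtuple and dict return value below are illustrative stand-ins, not the vendored dnspython classes:

```python
from collections import namedtuple

# Hypothetical stand-in for a dns.rdata.Rdata instance.
Rdata = namedtuple("Rdata", "rdclass rdtype data")

def from_rdata_list(ttl, rdatas):
    """Group rdatas sharing one (class, type) under a TTL, dropping duplicates."""
    if not rdatas:
        raise ValueError("rdata list must not be empty")
    rdclass, rdtype = rdatas[0].rdclass, rdatas[0].rdtype
    items = []
    for rd in rdatas:
        # dnspython's Rdataset.add rejects a class/type mismatch similarly.
        if (rd.rdclass, rd.rdtype) != (rdclass, rdtype):
            raise ValueError("incompatible rdata")
        if rd not in items:  # rdataset semantics: membership is set-like
            items.append(rd)
    return {"rdclass": rdclass, "rdtype": rdtype, "ttl": ttl, "items": items}

a1 = Rdata(1, 1, "10.0.0.1")  # IN A
a2 = Rdata(1, 1, "10.0.0.2")
rrset = from_rdata_list(300, [a1, a2, a1])
assert rrset["ttl"] == 300 and len(rrset["items"]) == 2
```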
diff --git a/openpype/vendor/python/python_2/dns/rdatatype.py b/openpype/vendor/python/python_2/dns/rdatatype.py
new file mode 100644
index 0000000000..b247bc9c42
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdatatype.py
@@ -0,0 +1,287 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Rdata Types."""
+
+import re
+
+import dns.exception
+
+NONE = 0
+A = 1
+NS = 2
+MD = 3
+MF = 4
+CNAME = 5
+SOA = 6
+MB = 7
+MG = 8
+MR = 9
+NULL = 10
+WKS = 11
+PTR = 12
+HINFO = 13
+MINFO = 14
+MX = 15
+TXT = 16
+RP = 17
+AFSDB = 18
+X25 = 19
+ISDN = 20
+RT = 21
+NSAP = 22
+NSAP_PTR = 23
+SIG = 24
+KEY = 25
+PX = 26
+GPOS = 27
+AAAA = 28
+LOC = 29
+NXT = 30
+SRV = 33
+NAPTR = 35
+KX = 36
+CERT = 37
+A6 = 38
+DNAME = 39
+OPT = 41
+APL = 42
+DS = 43
+SSHFP = 44
+IPSECKEY = 45
+RRSIG = 46
+NSEC = 47
+DNSKEY = 48
+DHCID = 49
+NSEC3 = 50
+NSEC3PARAM = 51
+TLSA = 52
+HIP = 55
+CDS = 59
+CDNSKEY = 60
+OPENPGPKEY = 61
+CSYNC = 62
+SPF = 99
+UNSPEC = 103
+EUI48 = 108
+EUI64 = 109
+TKEY = 249
+TSIG = 250
+IXFR = 251
+AXFR = 252
+MAILB = 253
+MAILA = 254
+ANY = 255
+URI = 256
+CAA = 257
+AVC = 258
+TA = 32768
+DLV = 32769
+
+_by_text = {
+ 'NONE': NONE,
+ 'A': A,
+ 'NS': NS,
+ 'MD': MD,
+ 'MF': MF,
+ 'CNAME': CNAME,
+ 'SOA': SOA,
+ 'MB': MB,
+ 'MG': MG,
+ 'MR': MR,
+ 'NULL': NULL,
+ 'WKS': WKS,
+ 'PTR': PTR,
+ 'HINFO': HINFO,
+ 'MINFO': MINFO,
+ 'MX': MX,
+ 'TXT': TXT,
+ 'RP': RP,
+ 'AFSDB': AFSDB,
+ 'X25': X25,
+ 'ISDN': ISDN,
+ 'RT': RT,
+ 'NSAP': NSAP,
+ 'NSAP-PTR': NSAP_PTR,
+ 'SIG': SIG,
+ 'KEY': KEY,
+ 'PX': PX,
+ 'GPOS': GPOS,
+ 'AAAA': AAAA,
+ 'LOC': LOC,
+ 'NXT': NXT,
+ 'SRV': SRV,
+ 'NAPTR': NAPTR,
+ 'KX': KX,
+ 'CERT': CERT,
+ 'A6': A6,
+ 'DNAME': DNAME,
+ 'OPT': OPT,
+ 'APL': APL,
+ 'DS': DS,
+ 'SSHFP': SSHFP,
+ 'IPSECKEY': IPSECKEY,
+ 'RRSIG': RRSIG,
+ 'NSEC': NSEC,
+ 'DNSKEY': DNSKEY,
+ 'DHCID': DHCID,
+ 'NSEC3': NSEC3,
+ 'NSEC3PARAM': NSEC3PARAM,
+ 'TLSA': TLSA,
+ 'HIP': HIP,
+ 'CDS': CDS,
+ 'CDNSKEY': CDNSKEY,
+ 'OPENPGPKEY': OPENPGPKEY,
+ 'CSYNC': CSYNC,
+ 'SPF': SPF,
+ 'UNSPEC': UNSPEC,
+ 'EUI48': EUI48,
+ 'EUI64': EUI64,
+ 'TKEY': TKEY,
+ 'TSIG': TSIG,
+ 'IXFR': IXFR,
+ 'AXFR': AXFR,
+ 'MAILB': MAILB,
+ 'MAILA': MAILA,
+ 'ANY': ANY,
+ 'URI': URI,
+ 'CAA': CAA,
+ 'AVC': AVC,
+ 'TA': TA,
+ 'DLV': DLV,
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mapping not to be a true inverse.
+
+_by_value = {y: x for x, y in _by_text.items()}
+
+_metatypes = {
+ OPT: True
+}
+
+_singletons = {
+ SOA: True,
+ NXT: True,
+ DNAME: True,
+ NSEC: True,
+ CNAME: True,
+}
+
+_unknown_type_pattern = re.compile('TYPE([0-9]+)$', re.I)
+
+
+class UnknownRdatatype(dns.exception.DNSException):
+ """DNS resource record type is unknown."""
+
+
+def from_text(text):
+ """Convert text into a DNS rdata type value.
+
+ The input text can be a defined DNS RR type mnemonic or
+ an instance of the DNS generic type syntax.
+
+ For example, "NS" and "TYPE2" will both result in a value of 2.
+
+ Raises ``dns.rdatatype.UnknownRdatatype`` if the type is unknown.
+
+ Raises ``ValueError`` if the rdata type value is not between 0 and 65535.
+
+ Returns an ``int``.
+ """
+
+ value = _by_text.get(text.upper())
+ if value is None:
+ match = _unknown_type_pattern.match(text)
+ if match is None:
+ raise UnknownRdatatype
+ value = int(match.group(1))
+ if value < 0 or value > 65535:
+ raise ValueError("type must be between 0 and 65535")
+ return value
+
+
+def to_text(value):
+ """Convert a DNS rdata type value to text.
+
+ If the value has a known mnemonic, it will be used, otherwise the
+ DNS generic type syntax will be used.
+
+ Raises ``ValueError`` if the rdata type value is not between 0 and 65535.
+
+ Returns a ``str``.
+ """
+
+ if value < 0 or value > 65535:
+ raise ValueError("type must be between 0 and 65535")
+ text = _by_value.get(value)
+ if text is None:
+ text = 'TYPE' + repr(value)
+ return text
+
+
+def is_metatype(rdtype):
+ """True if the specified type is a metatype.
+
+ *rdtype* is an ``int``.
+
+ The currently defined metatypes are TKEY, TSIG, IXFR, AXFR, MAILA,
+ MAILB, ANY, and OPT.
+
+ Returns a ``bool``.
+ """
+
+ if rdtype >= TKEY and rdtype <= ANY or rdtype in _metatypes:
+ return True
+ return False
+
+
+def is_singleton(rdtype):
+ """Is the specified type a singleton type?
+
+ Singleton types can only have a single rdata in an rdataset, or a single
+ RR in an RRset.
+
+ The currently defined singleton types are CNAME, DNAME, NSEC, NXT, and
+ SOA.
+
+ *rdtype* is an ``int``.
+
+ Returns a ``bool``.
+ """
+
+ if rdtype in _singletons:
+ return True
+ return False
+
+
+def register_type(rdtype, rdtype_text, is_singleton=False): # pylint: disable=redefined-outer-name
+ """Dynamically register an rdatatype.
+
+ *rdtype*, an ``int``, the rdatatype to register.
+
+ *rdtype_text*, a ``text``, the textual form of the rdatatype.
+
+ *is_singleton*, a ``bool``, indicating if the type is a singleton (i.e.
+ RRsets of the type can have only one member.)
+ """
+
+ _by_text[rdtype_text] = rdtype
+ _by_value[rdtype] = rdtype_text
+ if is_singleton:
+ _singletons[rdtype] = True
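The mnemonic tables plus `from_text`/`to_text` above also accept the generic `TYPE###` syntax, so unknown numeric types still round-trip. A self-contained sketch of that logic, using only a few mnemonics from the full table (function names are illustrative, not the module's):

```python
import re

# Tiny excerpt of the mnemonic table; the real module covers ~70 types.
_by_text = {"A": 1, "NS": 2, "CNAME": 5, "TXT": 16, "AAAA": 28}
_by_value = {v: k for k, v in _by_text.items()}
_generic = re.compile(r"TYPE([0-9]+)$", re.I)

def rdtype_from_text(text):
    """Mnemonic or generic TYPE### syntax -> int type code."""
    value = _by_text.get(text.upper())
    if value is None:
        m = _generic.match(text)
        if m is None:
            raise KeyError("unknown rdatatype: %r" % text)
        value = int(m.group(1))
    if not 0 <= value <= 65535:
        raise ValueError("type must be between 0 and 65535")
    return value

def rdtype_to_text(value):
    """Int type code -> mnemonic if known, else generic syntax."""
    if not 0 <= value <= 65535:
        raise ValueError("type must be between 0 and 65535")
    return _by_value.get(value, "TYPE%d" % value)

assert rdtype_from_text("NS") == rdtype_from_text("TYPE2") == 2
assert rdtype_to_text(28) == "AAAA"
assert rdtype_to_text(4096) == "TYPE4096"
```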
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/AFSDB.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/AFSDB.py
new file mode 100644
index 0000000000..c6a700cf56
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/AFSDB.py
@@ -0,0 +1,55 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.mxbase
+
+
+class AFSDB(dns.rdtypes.mxbase.UncompressedDowncasingMX):
+
+ """AFSDB record
+
+ @ivar subtype: the subtype value
+ @type subtype: int
+ @ivar hostname: the hostname
+ @type hostname: dns.name.Name object"""
+
+ # Use the property mechanism to make "subtype" an alias for the
+ # "preference" attribute, and "hostname" an alias for the "exchange"
+ # attribute.
+ #
+ # This lets us inherit the UncompressedMX implementation but lets
+ # the caller use appropriate attribute names for the rdata type.
+ #
+ # We probably lose some performance vs. a cut-and-paste
+ # implementation, but this way we don't copy code, and that's
+ # good.
+
+ def get_subtype(self):
+ return self.preference
+
+ def set_subtype(self, subtype):
+ self.preference = subtype
+
+ subtype = property(get_subtype, set_subtype)
+
+ def get_hostname(self):
+ return self.exchange
+
+ def set_hostname(self, hostname):
+ self.exchange = hostname
+
+ hostname = property(get_hostname, set_hostname)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/AVC.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/AVC.py
new file mode 100644
index 0000000000..7f340b39d2
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/AVC.py
@@ -0,0 +1,25 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2016 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.txtbase
+
+
+class AVC(dns.rdtypes.txtbase.TXTBase):
+
+ """AVC record
+
+ @see: U{http://www.iana.org/assignments/dns-parameters/AVC/avc-completed-template}"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CAA.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CAA.py
new file mode 100644
index 0000000000..0acf201ab1
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CAA.py
@@ -0,0 +1,75 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+
+
+class CAA(dns.rdata.Rdata):
+
+ """CAA (Certification Authority Authorization) record
+
+ @ivar flags: the flags
+ @type flags: int
+ @ivar tag: the tag
+ @type tag: string
+ @ivar value: the value
+ @type value: string
+ @see: RFC 6844"""
+
+ __slots__ = ['flags', 'tag', 'value']
+
+ def __init__(self, rdclass, rdtype, flags, tag, value):
+ super(CAA, self).__init__(rdclass, rdtype)
+ self.flags = flags
+ self.tag = tag
+ self.value = value
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%u %s "%s"' % (self.flags,
+ dns.rdata._escapify(self.tag),
+ dns.rdata._escapify(self.value))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ flags = tok.get_uint8()
+ tag = tok.get_string().encode()
+ if len(tag) > 255:
+ raise dns.exception.SyntaxError("tag too long")
+ if not tag.isalnum():
+ raise dns.exception.SyntaxError("tag is not alphanumeric")
+ value = tok.get_string().encode()
+ return cls(rdclass, rdtype, flags, tag, value)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(struct.pack('!B', self.flags))
+ l = len(self.tag)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.tag)
+ file.write(self.value)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (flags, l) = struct.unpack('!BB', wire[current: current + 2])
+ current += 2
+ tag = wire[current: current + l]
+ value = wire[current + l:current + rdlen - 2]
+ return cls(rdclass, rdtype, flags, tag, value)
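The CAA wire layout handled by `to_wire`/`from_wire` above is: flags (one octet), tag length (one octet), tag bytes, then the value running to the end of the rdata. A stdlib-only round-trip sketch of that layout (function names are illustrative):

```python
import struct

def caa_to_wire(flags, tag, value):
    """Pack CAA rdata: flags (u8), tag length (u8), tag, value."""
    assert len(tag) < 256
    return struct.pack("!BB", flags, len(tag)) + tag + value

def caa_from_wire(wire):
    """Unpack CAA rdata; the value is everything after the tag."""
    flags, tag_len = struct.unpack("!BB", wire[:2])
    tag = wire[2:2 + tag_len]
    value = wire[2 + tag_len:]
    return flags, tag, value

packed = caa_to_wire(0, b"issue", b"letsencrypt.org")
assert caa_from_wire(packed) == (0, b"issue", b"letsencrypt.org")
```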
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDNSKEY.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDNSKEY.py
new file mode 100644
index 0000000000..653ae1be16
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDNSKEY.py
@@ -0,0 +1,27 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.dnskeybase
+from dns.rdtypes.dnskeybase import flags_to_text_set, flags_from_text_set
+
+
+__all__ = ['flags_to_text_set', 'flags_from_text_set']
+
+
+class CDNSKEY(dns.rdtypes.dnskeybase.DNSKEYBase):
+
+ """CDNSKEY record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDS.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDS.py
new file mode 100644
index 0000000000..a63041dd79
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CDS.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.dsbase
+
+
+class CDS(dns.rdtypes.dsbase.DSBase):
+
+ """CDS record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CERT.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CERT.py
new file mode 100644
index 0000000000..eea27b52c3
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CERT.py
@@ -0,0 +1,123 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import base64
+
+import dns.exception
+import dns.dnssec
+import dns.rdata
+import dns.tokenizer
+
+_ctype_by_value = {
+ 1: 'PKIX',
+ 2: 'SPKI',
+ 3: 'PGP',
+ 253: 'URI',
+ 254: 'OID',
+}
+
+_ctype_by_name = {
+ 'PKIX': 1,
+ 'SPKI': 2,
+ 'PGP': 3,
+ 'URI': 253,
+ 'OID': 254,
+}
+
+
+def _ctype_from_text(what):
+ v = _ctype_by_name.get(what)
+ if v is not None:
+ return v
+ return int(what)
+
+
+def _ctype_to_text(what):
+ v = _ctype_by_value.get(what)
+ if v is not None:
+ return v
+ return str(what)
+
+
+class CERT(dns.rdata.Rdata):
+
+ """CERT record
+
+ @ivar certificate_type: certificate type
+ @type certificate_type: int
+ @ivar key_tag: key tag
+ @type key_tag: int
+ @ivar algorithm: algorithm
+ @type algorithm: int
+ @ivar certificate: the certificate or CRL
+ @type certificate: string
+ @see: RFC 2538"""
+
+ __slots__ = ['certificate_type', 'key_tag', 'algorithm', 'certificate']
+
+ def __init__(self, rdclass, rdtype, certificate_type, key_tag, algorithm,
+ certificate):
+ super(CERT, self).__init__(rdclass, rdtype)
+ self.certificate_type = certificate_type
+ self.key_tag = key_tag
+ self.algorithm = algorithm
+ self.certificate = certificate
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ certificate_type = _ctype_to_text(self.certificate_type)
+ return "%s %d %s %s" % (certificate_type, self.key_tag,
+ dns.dnssec.algorithm_to_text(self.algorithm),
+ dns.rdata._base64ify(self.certificate))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ certificate_type = _ctype_from_text(tok.get_string())
+ key_tag = tok.get_uint16()
+ algorithm = dns.dnssec.algorithm_from_text(tok.get_string())
+ if algorithm < 0 or algorithm > 255:
+ raise dns.exception.SyntaxError("bad algorithm type")
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ certificate = base64.b64decode(b64)
+ return cls(rdclass, rdtype, certificate_type, key_tag,
+ algorithm, certificate)
+
+ def to_wire(self, file, compress=None, origin=None):
+ prefix = struct.pack("!HHB", self.certificate_type, self.key_tag,
+ self.algorithm)
+ file.write(prefix)
+ file.write(self.certificate)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ prefix = wire[current: current + 5].unwrap()
+ current += 5
+ rdlen -= 5
+ if rdlen < 0:
+ raise dns.exception.FormError
+ (certificate_type, key_tag, algorithm) = struct.unpack("!HHB", prefix)
+ certificate = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, certificate_type, key_tag, algorithm,
+ certificate)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CNAME.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CNAME.py
new file mode 100644
index 0000000000..11d42aa7fd
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CNAME.py
@@ -0,0 +1,27 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.nsbase
+
+
+class CNAME(dns.rdtypes.nsbase.NSBase):
+
+ """CNAME record
+
+ Note: although CNAME is officially a singleton type, dnspython allows
+ non-singleton CNAME rdatasets because such sets have been commonly
+ used by BIND and other nameservers for load balancing."""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/CSYNC.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CSYNC.py
new file mode 100644
index 0000000000..06292fb28c
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/CSYNC.py
@@ -0,0 +1,126 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011, 2016 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+import dns.name
+from dns._compat import xrange
+
+class CSYNC(dns.rdata.Rdata):
+
+ """CSYNC record
+
+ @ivar serial: the SOA serial number
+ @type serial: int
+ @ivar flags: the CSYNC flags
+ @type flags: int
+ @ivar windows: the windowed bitmap list
+ @type windows: list of (window number, string) tuples"""
+
+ __slots__ = ['serial', 'flags', 'windows']
+
+ def __init__(self, rdclass, rdtype, serial, flags, windows):
+ super(CSYNC, self).__init__(rdclass, rdtype)
+ self.serial = serial
+ self.flags = flags
+ self.windows = windows
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ text = ''
+ for (window, bitmap) in self.windows:
+ bits = []
+ for i in xrange(0, len(bitmap)):
+ byte = bitmap[i]
+ for j in xrange(0, 8):
+ if byte & (0x80 >> j):
+ bits.append(dns.rdatatype.to_text(window * 256 +
+ i * 8 + j))
+ text += (' ' + ' '.join(bits))
+ return '%d %d%s' % (self.serial, self.flags, text)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ serial = tok.get_uint32()
+ flags = tok.get_uint16()
+ rdtypes = []
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ nrdtype = dns.rdatatype.from_text(token.value)
+ if nrdtype == 0:
+ raise dns.exception.SyntaxError("CSYNC with bit 0")
+ if nrdtype > 65535:
+ raise dns.exception.SyntaxError("CSYNC with bit > 65535")
+ rdtypes.append(nrdtype)
+ rdtypes.sort()
+ window = 0
+ octets = 0
+ prior_rdtype = 0
+ bitmap = bytearray(b'\0' * 32)
+ windows = []
+ for nrdtype in rdtypes:
+ if nrdtype == prior_rdtype:
+ continue
+ prior_rdtype = nrdtype
+ new_window = nrdtype // 256
+ if new_window != window:
+ windows.append((window, bitmap[0:octets]))
+ bitmap = bytearray(b'\0' * 32)
+ window = new_window
+ offset = nrdtype % 256
+ byte = offset // 8
+ bit = offset % 8
+ octets = byte + 1
+ bitmap[byte] = bitmap[byte] | (0x80 >> bit)
+
+ windows.append((window, bitmap[0:octets]))
+ return cls(rdclass, rdtype, serial, flags, windows)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(struct.pack('!IH', self.serial, self.flags))
+ for (window, bitmap) in self.windows:
+ file.write(struct.pack('!BB', window, len(bitmap)))
+ file.write(bitmap)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ if rdlen < 6:
+ raise dns.exception.FormError("CSYNC too short")
+ (serial, flags) = struct.unpack("!IH", wire[current: current + 6])
+ current += 6
+ rdlen -= 6
+ windows = []
+ while rdlen > 0:
+ if rdlen < 3:
+ raise dns.exception.FormError("CSYNC too short")
+ window = wire[current]
+ octets = wire[current + 1]
+ if octets == 0 or octets > 32:
+ raise dns.exception.FormError("bad CSYNC octets")
+ current += 2
+ rdlen -= 2
+ if rdlen < octets:
+ raise dns.exception.FormError("bad CSYNC bitmap length")
+ bitmap = bytearray(wire[current: current + octets].unwrap())
+ current += octets
+ rdlen -= octets
+ windows.append((window, bitmap))
+ return cls(rdclass, rdtype, serial, flags, windows)
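CSYNC's type list uses the NSEC-style windowed bitmap encoding that `from_text` and `to_text` above implement: type codes are grouped into 256-type windows, each window carrying up to 32 octets of bitmap with the most significant bit first and trailing zero octets trimmed. A stdlib-only sketch of that round-trip (function names are illustrative):

```python
def types_to_windows(rdtypes):
    """Encode sorted type codes as (window, bitmap) pairs."""
    windows = {}
    for t in sorted(set(rdtypes)):
        window, offset = divmod(t, 256)
        byte, bit = divmod(offset, 8)
        bitmap = windows.setdefault(window, bytearray(32))
        bitmap[byte] |= 0x80 >> bit
    out = []
    for window in sorted(windows):
        bitmap = windows[window]
        octets = max(i + 1 for i in range(32) if bitmap[i])  # trim trailing zeros
        out.append((window, bytes(bitmap[:octets])))
    return out

def windows_to_types(windows):
    """Decode (window, bitmap) pairs back to a sorted list of type codes."""
    rdtypes = []
    for window, bitmap in windows:
        for i, byte in enumerate(bitmap):
            for j in range(8):
                if byte & (0x80 >> j):
                    rdtypes.append(window * 256 + i * 8 + j)
    return rdtypes

encoded = types_to_windows([1, 2, 28])  # A, NS, AAAA -> one window
assert windows_to_types(encoded) == [1, 2, 28]
```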
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/DLV.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DLV.py
new file mode 100644
index 0000000000..1635212583
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DLV.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.dsbase
+
+
+class DLV(dns.rdtypes.dsbase.DSBase):
+
+ """DLV record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNAME.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNAME.py
new file mode 100644
index 0000000000..2499283cfa
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNAME.py
@@ -0,0 +1,26 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.nsbase
+
+
+class DNAME(dns.rdtypes.nsbase.UncompressedNS):
+
+ """DNAME record"""
+
+ def to_digestable(self, origin=None):
+ return self.target.to_digestable(origin)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNSKEY.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNSKEY.py
new file mode 100644
index 0000000000..e36f7bc5b1
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DNSKEY.py
@@ -0,0 +1,27 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.dnskeybase
+from dns.rdtypes.dnskeybase import flags_to_text_set, flags_from_text_set
+
+
+__all__ = ['flags_to_text_set', 'flags_from_text_set']
+
+
+class DNSKEY(dns.rdtypes.dnskeybase.DNSKEYBase):
+
+ """DNSKEY record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/DS.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DS.py
new file mode 100644
index 0000000000..7d457b2281
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/DS.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.dsbase
+
+
+class DS(dns.rdtypes.dsbase.DSBase):
+
+ """DS record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI48.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI48.py
new file mode 100644
index 0000000000..aa260e205d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI48.py
@@ -0,0 +1,29 @@
+# Copyright (C) 2015 Red Hat, Inc.
+# Author: Petr Spacek
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED 'AS IS' AND RED HAT DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.euibase
+
+
+class EUI48(dns.rdtypes.euibase.EUIBase):
+
+ """EUI48 record
+
+ @ivar fingerprint: 48-bit Extended Unique Identifier (EUI-48)
+ @type fingerprint: string
+ @see: rfc7043.txt"""
+
+ byte_len = 6 # 0123456789ab (in hex)
+ text_len = byte_len * 3 - 1 # 01-23-45-67-89-ab
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI64.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI64.py
new file mode 100644
index 0000000000..5eba350d8f
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/EUI64.py
@@ -0,0 +1,29 @@
+# Copyright (C) 2015 Red Hat, Inc.
+# Author: Petr Spacek
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED 'AS IS' AND RED HAT DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.euibase
+
+
+class EUI64(dns.rdtypes.euibase.EUIBase):
+
+ """EUI64 record
+
+ @ivar fingerprint: 64-bit Extended Unique Identifier (EUI-64)
+ @type fingerprint: string
+ @see: rfc7043.txt"""
+
+ byte_len = 8 # 0123456789abcdef (in hex)
+ text_len = byte_len * 3 - 1 # 01-23-45-67-89-ab-cd-ef
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/GPOS.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/GPOS.py
new file mode 100644
index 0000000000..422822f03b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/GPOS.py
@@ -0,0 +1,162 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+from dns._compat import long, text_type
+
+
+def _validate_float_string(what):
+ if what[0] == b'-'[0] or what[0] == b'+'[0]:
+ what = what[1:]
+ if what.isdigit():
+ return
+ (left, right) = what.split(b'.')
+ if left == b'' and right == b'':
+ raise dns.exception.FormError
+ if not left == b'' and not left.decode().isdigit():
+ raise dns.exception.FormError
+ if not right == b'' and not right.decode().isdigit():
+ raise dns.exception.FormError
+
+
+def _sanitize(value):
+ if isinstance(value, text_type):
+ return value.encode()
+ return value
+
+
+class GPOS(dns.rdata.Rdata):
+
+ """GPOS record
+
+ @ivar latitude: latitude
+ @type latitude: string
+ @ivar longitude: longitude
+ @type longitude: string
+ @ivar altitude: altitude
+ @type altitude: string
+ @see: RFC 1712"""
+
+ __slots__ = ['latitude', 'longitude', 'altitude']
+
+ def __init__(self, rdclass, rdtype, latitude, longitude, altitude):
+ super(GPOS, self).__init__(rdclass, rdtype)
+ if isinstance(latitude, float) or \
+ isinstance(latitude, int) or \
+ isinstance(latitude, long):
+ latitude = str(latitude)
+ if isinstance(longitude, float) or \
+ isinstance(longitude, int) or \
+ isinstance(longitude, long):
+ longitude = str(longitude)
+ if isinstance(altitude, float) or \
+ isinstance(altitude, int) or \
+ isinstance(altitude, long):
+ altitude = str(altitude)
+ latitude = _sanitize(latitude)
+ longitude = _sanitize(longitude)
+ altitude = _sanitize(altitude)
+ _validate_float_string(latitude)
+ _validate_float_string(longitude)
+ _validate_float_string(altitude)
+ self.latitude = latitude
+ self.longitude = longitude
+ self.altitude = altitude
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '{} {} {}'.format(self.latitude.decode(),
+ self.longitude.decode(),
+ self.altitude.decode())
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ latitude = tok.get_string()
+ longitude = tok.get_string()
+ altitude = tok.get_string()
+ tok.get_eol()
+ return cls(rdclass, rdtype, latitude, longitude, altitude)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.latitude)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.latitude)
+ l = len(self.longitude)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.longitude)
+ l = len(self.altitude)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.altitude)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen:
+ raise dns.exception.FormError
+ latitude = wire[current: current + l].unwrap()
+ current += l
+ rdlen -= l
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen:
+ raise dns.exception.FormError
+ longitude = wire[current: current + l].unwrap()
+ current += l
+ rdlen -= l
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l != rdlen:
+ raise dns.exception.FormError
+ altitude = wire[current: current + l].unwrap()
+ return cls(rdclass, rdtype, latitude, longitude, altitude)
+
+ def _get_float_latitude(self):
+ return float(self.latitude)
+
+ def _set_float_latitude(self, value):
+ self.latitude = str(value)
+
+ float_latitude = property(_get_float_latitude, _set_float_latitude,
+ doc="latitude as a floating point value")
+
+ def _get_float_longitude(self):
+ return float(self.longitude)
+
+ def _set_float_longitude(self, value):
+ self.longitude = str(value)
+
+ float_longitude = property(_get_float_longitude, _set_float_longitude,
+ doc="longitude as a floating point value")
+
+ def _get_float_altitude(self):
+ return float(self.altitude)
+
+ def _set_float_altitude(self, value):
+ self.altitude = str(value)
+
+ float_altitude = property(_get_float_altitude, _set_float_altitude,
+ doc="altitude as a floating point value")
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/HINFO.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/HINFO.py
new file mode 100644
index 0000000000..e4e0b34a49
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/HINFO.py
@@ -0,0 +1,86 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+from dns._compat import text_type
+
+
+class HINFO(dns.rdata.Rdata):
+
+ """HINFO record
+
+ @ivar cpu: the CPU type
+ @type cpu: string
+ @ivar os: the OS type
+ @type os: string
+ @see: RFC 1035"""
+
+ __slots__ = ['cpu', 'os']
+
+ def __init__(self, rdclass, rdtype, cpu, os):
+ super(HINFO, self).__init__(rdclass, rdtype)
+ if isinstance(cpu, text_type):
+ self.cpu = cpu.encode()
+ else:
+ self.cpu = cpu
+ if isinstance(os, text_type):
+ self.os = os.encode()
+ else:
+ self.os = os
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '"{}" "{}"'.format(dns.rdata._escapify(self.cpu),
+ dns.rdata._escapify(self.os))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ cpu = tok.get_string()
+ os = tok.get_string()
+ tok.get_eol()
+ return cls(rdclass, rdtype, cpu, os)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.cpu)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.cpu)
+ l = len(self.os)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.os)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen:
+ raise dns.exception.FormError
+ cpu = wire[current:current + l].unwrap()
+ current += l
+ rdlen -= l
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l != rdlen:
+ raise dns.exception.FormError
+ os = wire[current: current + l].unwrap()
+ return cls(rdclass, rdtype, cpu, os)
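The HINFO `to_wire` above emits each field as a DNS `<character-string>`: one length octet followed by the raw bytes, which is why it asserts the length is below 256. A minimal standalone sketch of that encoding (not part of this patch):

```python
import struct

def pack_character_string(data):
    # DNS <character-string> wire form, as used by HINFO.to_wire:
    # a single length octet followed by the raw bytes (so len < 256).
    if len(data) >= 256:
        raise ValueError("character-string too long")
    return struct.pack('!B', len(data)) + data

# An HINFO rdata body is simply two such strings back to back.
wire = pack_character_string(b'INTEL-386') + pack_character_string(b'UNIX')
```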
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/HIP.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/HIP.py
new file mode 100644
index 0000000000..7c876b2d2f
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/HIP.py
@@ -0,0 +1,115 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2010, 2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import base64
+import binascii
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+
+
+class HIP(dns.rdata.Rdata):
+
+ """HIP record
+
+ @ivar hit: the host identity tag
+ @type hit: string
+ @ivar algorithm: the public key cryptographic algorithm
+ @type algorithm: int
+ @ivar key: the public key
+ @type key: string
+ @ivar servers: the rendezvous servers
+ @type servers: list of dns.name.Name objects
+ @see: RFC 5205"""
+
+ __slots__ = ['hit', 'algorithm', 'key', 'servers']
+
+ def __init__(self, rdclass, rdtype, hit, algorithm, key, servers):
+ super(HIP, self).__init__(rdclass, rdtype)
+ self.hit = hit
+ self.algorithm = algorithm
+ self.key = key
+ self.servers = servers
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ hit = binascii.hexlify(self.hit).decode()
+ key = base64.b64encode(self.key).replace(b'\n', b'').decode()
+ text = u''
+ servers = []
+ for server in self.servers:
+ servers.append(server.choose_relativity(origin, relativize))
+ if len(servers) > 0:
+ text += (u' ' + u' '.join((x.to_unicode() for x in servers)))
+ return u'%u %s %s%s' % (self.algorithm, hit, key, text)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ algorithm = tok.get_uint8()
+ hit = binascii.unhexlify(tok.get_string().encode())
+ if len(hit) > 255:
+ raise dns.exception.SyntaxError("HIT too long")
+ key = base64.b64decode(tok.get_string().encode())
+ servers = []
+ while 1:
+ token = tok.get()
+ if token.is_eol_or_eof():
+ break
+ server = dns.name.from_text(token.value, origin)
+ server.choose_relativity(origin, relativize)
+ servers.append(server)
+ return cls(rdclass, rdtype, hit, algorithm, key, servers)
+
+ def to_wire(self, file, compress=None, origin=None):
+ lh = len(self.hit)
+ lk = len(self.key)
+ file.write(struct.pack("!BBH", lh, self.algorithm, lk))
+ file.write(self.hit)
+ file.write(self.key)
+ for server in self.servers:
+ server.to_wire(file, None, origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (lh, algorithm, lk) = struct.unpack('!BBH',
+ wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ hit = wire[current: current + lh].unwrap()
+ current += lh
+ rdlen -= lh
+ key = wire[current: current + lk].unwrap()
+ current += lk
+ rdlen -= lk
+ servers = []
+ while rdlen > 0:
+ (server, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ current += cused
+ rdlen -= cused
+ if origin is not None:
+ server = server.relativize(origin)
+ servers.append(server)
+ return cls(rdclass, rdtype, hit, algorithm, key, servers)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ servers = []
+ for server in self.servers:
+ server = server.choose_relativity(origin, relativize)
+ servers.append(server)
+ self.servers = servers
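The HIP `to_wire`/`from_wire` pair above hinges on the fixed RFC 5205 header: HIT length (1 octet), public-key algorithm (1 octet), and public-key length (2 octets, big-endian), followed by the HIT and key bytes, with any rendezvous server names after that. A standalone sketch of just the header packing (illustrative, not part of this patch):

```python
import struct

def pack_hip_rdata(hit, algorithm, key):
    # Fixed HIP rdata prefix per RFC 5205: "!BBH" is HIT length,
    # algorithm, and key length in network byte order, then the
    # raw HIT and key bytes. Rendezvous servers would follow.
    return struct.pack('!BBH', len(hit), algorithm, len(key)) + hit + key

wire = pack_hip_rdata(b'\x01\x02\x03', 2, b'\xaa' * 5)
```

`from_wire` above reverses this with the same `'!BBH'` unpack on the first four octets.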
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/ISDN.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/ISDN.py
new file mode 100644
index 0000000000..f5f5f8b9ea
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/ISDN.py
@@ -0,0 +1,99 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+from dns._compat import text_type
+
+
+class ISDN(dns.rdata.Rdata):
+
+ """ISDN record
+
+ @ivar address: the ISDN address
+ @type address: string
+ @ivar subaddress: the ISDN subaddress (or '' if not present)
+ @type subaddress: string
+ @see: RFC 1183"""
+
+ __slots__ = ['address', 'subaddress']
+
+ def __init__(self, rdclass, rdtype, address, subaddress):
+ super(ISDN, self).__init__(rdclass, rdtype)
+ if isinstance(address, text_type):
+ self.address = address.encode()
+ else:
+ self.address = address
+        if isinstance(subaddress, text_type):
+ self.subaddress = subaddress.encode()
+ else:
+ self.subaddress = subaddress
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ if self.subaddress:
+ return '"{}" "{}"'.format(dns.rdata._escapify(self.address),
+ dns.rdata._escapify(self.subaddress))
+ else:
+ return '"%s"' % dns.rdata._escapify(self.address)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_string()
+ t = tok.get()
+ if not t.is_eol_or_eof():
+ tok.unget(t)
+ subaddress = tok.get_string()
+ else:
+ tok.unget(t)
+ subaddress = ''
+ tok.get_eol()
+ return cls(rdclass, rdtype, address, subaddress)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.address)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.address)
+ l = len(self.subaddress)
+ if l > 0:
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.subaddress)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen:
+ raise dns.exception.FormError
+ address = wire[current: current + l].unwrap()
+ current += l
+ rdlen -= l
+ if rdlen > 0:
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l != rdlen:
+ raise dns.exception.FormError
+ subaddress = wire[current: current + l].unwrap()
+ else:
+ subaddress = ''
+ return cls(rdclass, rdtype, address, subaddress)
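ISDN's `from_wire` above walks the same length-prefixed layout as HINFO, treating the second `<character-string>` as optional. A standalone Python 3 sketch of that decoding loop (illustrative only; the vendored code raises `dns.exception.FormError` where this raises `ValueError`):

```python
def unpack_character_strings(wire):
    # Inverse of the length-prefixed encoding used by ISDN.from_wire:
    # read one length octet, then that many bytes, until the buffer
    # is exhausted; a length running past the end means truncation.
    out = []
    i = 0
    while i < len(wire):
        length = wire[i]
        i += 1
        if i + length > len(wire):
            raise ValueError("truncated <character-string>")
        out.append(bytes(wire[i:i + length]))
        i += length
    return out
```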
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/LOC.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/LOC.py
new file mode 100644
index 0000000000..da9bb03a95
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/LOC.py
@@ -0,0 +1,327 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+from __future__ import division
+
+import struct
+
+import dns.exception
+import dns.rdata
+from dns._compat import long, xrange, round_py2_compat
+
+
+_pows = tuple(long(10**i) for i in range(0, 11))
+
+# default values are in centimeters
+_default_size = 100.0
+_default_hprec = 1000000.0
+_default_vprec = 1000.0
+
+
+def _exponent_of(what, desc):
+ if what == 0:
+ return 0
+ exp = None
+ for i in xrange(len(_pows)):
+ if what // _pows[i] == long(0):
+ exp = i - 1
+ break
+ if exp is None or exp < 0:
+ raise dns.exception.SyntaxError("%s value out of bounds" % desc)
+ return exp
+
+
+def _float_to_tuple(what):
+ if what < 0:
+ sign = -1
+ what *= -1
+ else:
+ sign = 1
+ what = round_py2_compat(what * 3600000)
+ degrees = int(what // 3600000)
+ what -= degrees * 3600000
+ minutes = int(what // 60000)
+ what -= minutes * 60000
+ seconds = int(what // 1000)
+ what -= int(seconds * 1000)
+ what = int(what)
+ return (degrees, minutes, seconds, what, sign)
+
+
+def _tuple_to_float(what):
+ value = float(what[0])
+ value += float(what[1]) / 60.0
+ value += float(what[2]) / 3600.0
+ value += float(what[3]) / 3600000.0
+ return float(what[4]) * value
+
+
+def _encode_size(what, desc):
+ what = long(what)
+ exponent = _exponent_of(what, desc) & 0xF
+ base = what // pow(10, exponent) & 0xF
+ return base * 16 + exponent
+
+
+def _decode_size(what, desc):
+ exponent = what & 0x0F
+ if exponent > 9:
+ raise dns.exception.SyntaxError("bad %s exponent" % desc)
+ base = (what & 0xF0) >> 4
+ if base > 9:
+ raise dns.exception.SyntaxError("bad %s base" % desc)
+ return long(base) * pow(10, exponent)
+
+
+class LOC(dns.rdata.Rdata):
+
+ """LOC record
+
+ @ivar latitude: latitude
+ @type latitude: (int, int, int, int, sign) tuple specifying the degrees, minutes,
+ seconds, milliseconds, and sign of the coordinate.
+ @ivar longitude: longitude
+ @type longitude: (int, int, int, int, sign) tuple specifying the degrees,
+ minutes, seconds, milliseconds, and sign of the coordinate.
+ @ivar altitude: altitude
+ @type altitude: float
+ @ivar size: size of the sphere
+ @type size: float
+ @ivar horizontal_precision: horizontal precision
+ @type horizontal_precision: float
+ @ivar vertical_precision: vertical precision
+ @type vertical_precision: float
+ @see: RFC 1876"""
+
+ __slots__ = ['latitude', 'longitude', 'altitude', 'size',
+ 'horizontal_precision', 'vertical_precision']
+
+ def __init__(self, rdclass, rdtype, latitude, longitude, altitude,
+ size=_default_size, hprec=_default_hprec,
+ vprec=_default_vprec):
+ """Initialize a LOC record instance.
+
+ The parameters I{latitude} and I{longitude} may be either a 4-tuple
+ of integers specifying (degrees, minutes, seconds, milliseconds),
+ or they may be floating point values specifying the number of
+ degrees. The other parameters are floats. Size, horizontal precision,
+ and vertical precision are specified in centimeters."""
+
+ super(LOC, self).__init__(rdclass, rdtype)
+ if isinstance(latitude, int) or isinstance(latitude, long):
+ latitude = float(latitude)
+ if isinstance(latitude, float):
+ latitude = _float_to_tuple(latitude)
+ self.latitude = latitude
+ if isinstance(longitude, int) or isinstance(longitude, long):
+ longitude = float(longitude)
+ if isinstance(longitude, float):
+ longitude = _float_to_tuple(longitude)
+ self.longitude = longitude
+ self.altitude = float(altitude)
+ self.size = float(size)
+ self.horizontal_precision = float(hprec)
+ self.vertical_precision = float(vprec)
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ if self.latitude[4] > 0:
+ lat_hemisphere = 'N'
+ else:
+ lat_hemisphere = 'S'
+ if self.longitude[4] > 0:
+ long_hemisphere = 'E'
+ else:
+ long_hemisphere = 'W'
+ text = "%d %d %d.%03d %s %d %d %d.%03d %s %0.2fm" % (
+ self.latitude[0], self.latitude[1],
+ self.latitude[2], self.latitude[3], lat_hemisphere,
+ self.longitude[0], self.longitude[1], self.longitude[2],
+ self.longitude[3], long_hemisphere,
+ self.altitude / 100.0
+ )
+
+ # do not print default values
+ if self.size != _default_size or \
+ self.horizontal_precision != _default_hprec or \
+ self.vertical_precision != _default_vprec:
+ text += " {:0.2f}m {:0.2f}m {:0.2f}m".format(
+ self.size / 100.0, self.horizontal_precision / 100.0,
+ self.vertical_precision / 100.0
+ )
+ return text
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ latitude = [0, 0, 0, 0, 1]
+ longitude = [0, 0, 0, 0, 1]
+ size = _default_size
+ hprec = _default_hprec
+ vprec = _default_vprec
+
+ latitude[0] = tok.get_int()
+ t = tok.get_string()
+ if t.isdigit():
+ latitude[1] = int(t)
+ t = tok.get_string()
+ if '.' in t:
+ (seconds, milliseconds) = t.split('.')
+ if not seconds.isdigit():
+ raise dns.exception.SyntaxError(
+ 'bad latitude seconds value')
+ latitude[2] = int(seconds)
+ if latitude[2] >= 60:
+ raise dns.exception.SyntaxError('latitude seconds >= 60')
+ l = len(milliseconds)
+ if l == 0 or l > 3 or not milliseconds.isdigit():
+ raise dns.exception.SyntaxError(
+ 'bad latitude milliseconds value')
+ if l == 1:
+ m = 100
+ elif l == 2:
+ m = 10
+ else:
+ m = 1
+ latitude[3] = m * int(milliseconds)
+ t = tok.get_string()
+ elif t.isdigit():
+ latitude[2] = int(t)
+ t = tok.get_string()
+ if t == 'S':
+ latitude[4] = -1
+ elif t != 'N':
+ raise dns.exception.SyntaxError('bad latitude hemisphere value')
+
+ longitude[0] = tok.get_int()
+ t = tok.get_string()
+ if t.isdigit():
+ longitude[1] = int(t)
+ t = tok.get_string()
+ if '.' in t:
+ (seconds, milliseconds) = t.split('.')
+ if not seconds.isdigit():
+ raise dns.exception.SyntaxError(
+ 'bad longitude seconds value')
+ longitude[2] = int(seconds)
+ if longitude[2] >= 60:
+ raise dns.exception.SyntaxError('longitude seconds >= 60')
+ l = len(milliseconds)
+ if l == 0 or l > 3 or not milliseconds.isdigit():
+ raise dns.exception.SyntaxError(
+ 'bad longitude milliseconds value')
+ if l == 1:
+ m = 100
+ elif l == 2:
+ m = 10
+ else:
+ m = 1
+ longitude[3] = m * int(milliseconds)
+ t = tok.get_string()
+ elif t.isdigit():
+ longitude[2] = int(t)
+ t = tok.get_string()
+ if t == 'W':
+ longitude[4] = -1
+ elif t != 'E':
+ raise dns.exception.SyntaxError('bad longitude hemisphere value')
+
+ t = tok.get_string()
+ if t[-1] == 'm':
+ t = t[0: -1]
+ altitude = float(t) * 100.0 # m -> cm
+
+ token = tok.get().unescape()
+ if not token.is_eol_or_eof():
+ value = token.value
+ if value[-1] == 'm':
+ value = value[0: -1]
+ size = float(value) * 100.0 # m -> cm
+ token = tok.get().unescape()
+ if not token.is_eol_or_eof():
+ value = token.value
+ if value[-1] == 'm':
+ value = value[0: -1]
+ hprec = float(value) * 100.0 # m -> cm
+ token = tok.get().unescape()
+ if not token.is_eol_or_eof():
+ value = token.value
+ if value[-1] == 'm':
+ value = value[0: -1]
+ vprec = float(value) * 100.0 # m -> cm
+ tok.get_eol()
+
+ return cls(rdclass, rdtype, latitude, longitude, altitude,
+ size, hprec, vprec)
+
+ def to_wire(self, file, compress=None, origin=None):
+ milliseconds = (self.latitude[0] * 3600000 +
+ self.latitude[1] * 60000 +
+ self.latitude[2] * 1000 +
+ self.latitude[3]) * self.latitude[4]
+ latitude = long(0x80000000) + milliseconds
+ milliseconds = (self.longitude[0] * 3600000 +
+ self.longitude[1] * 60000 +
+ self.longitude[2] * 1000 +
+ self.longitude[3]) * self.longitude[4]
+ longitude = long(0x80000000) + milliseconds
+ altitude = long(self.altitude) + long(10000000)
+ size = _encode_size(self.size, "size")
+ hprec = _encode_size(self.horizontal_precision, "horizontal precision")
+ vprec = _encode_size(self.vertical_precision, "vertical precision")
+ wire = struct.pack("!BBBBIII", 0, size, hprec, vprec, latitude,
+ longitude, altitude)
+ file.write(wire)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (version, size, hprec, vprec, latitude, longitude, altitude) = \
+ struct.unpack("!BBBBIII", wire[current: current + rdlen])
+ if latitude > long(0x80000000):
+ latitude = float(latitude - long(0x80000000)) / 3600000
+ else:
+ latitude = -1 * float(long(0x80000000) - latitude) / 3600000
+ if latitude < -90.0 or latitude > 90.0:
+ raise dns.exception.FormError("bad latitude")
+ if longitude > long(0x80000000):
+ longitude = float(longitude - long(0x80000000)) / 3600000
+ else:
+ longitude = -1 * float(long(0x80000000) - longitude) / 3600000
+ if longitude < -180.0 or longitude > 180.0:
+ raise dns.exception.FormError("bad longitude")
+ altitude = float(altitude) - 10000000.0
+ size = _decode_size(size, "size")
+ hprec = _decode_size(hprec, "horizontal precision")
+ vprec = _decode_size(vprec, "vertical precision")
+ return cls(rdclass, rdtype, latitude, longitude, altitude,
+ size, hprec, vprec)
+
+ def _get_float_latitude(self):
+ return _tuple_to_float(self.latitude)
+
+ def _set_float_latitude(self, value):
+ self.latitude = _float_to_tuple(value)
+
+ float_latitude = property(_get_float_latitude, _set_float_latitude,
+ doc="latitude as a floating point value")
+
+ def _get_float_longitude(self):
+ return _tuple_to_float(self.longitude)
+
+ def _set_float_longitude(self, value):
+ self.longitude = _float_to_tuple(value)
+
+ float_longitude = property(_get_float_longitude, _set_float_longitude,
+ doc="longitude as a floating point value")
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/MX.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/MX.py
new file mode 100644
index 0000000000..0a06494f73
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/MX.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.mxbase
+
+
+class MX(dns.rdtypes.mxbase.MXBase):
+
+ """MX record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/NS.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NS.py
new file mode 100644
index 0000000000..f9fcf637f7
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NS.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.nsbase
+
+
+class NS(dns.rdtypes.nsbase.NSBase):
+
+ """NS record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC.py
new file mode 100644
index 0000000000..4e3da7296b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC.py
@@ -0,0 +1,128 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+import dns.name
+from dns._compat import xrange
+
+
+class NSEC(dns.rdata.Rdata):
+
+ """NSEC record
+
+ @ivar next: the next name
+ @type next: dns.name.Name object
+ @ivar windows: the windowed bitmap list
+ @type windows: list of (window number, string) tuples"""
+
+ __slots__ = ['next', 'windows']
+
+ def __init__(self, rdclass, rdtype, next, windows):
+ super(NSEC, self).__init__(rdclass, rdtype)
+ self.next = next
+ self.windows = windows
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ next = self.next.choose_relativity(origin, relativize)
+ text = ''
+ for (window, bitmap) in self.windows:
+ bits = []
+ for i in xrange(0, len(bitmap)):
+ byte = bitmap[i]
+ for j in xrange(0, 8):
+ if byte & (0x80 >> j):
+ bits.append(dns.rdatatype.to_text(window * 256 +
+ i * 8 + j))
+ text += (' ' + ' '.join(bits))
+ return '{}{}'.format(next, text)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ next = tok.get_name()
+ next = next.choose_relativity(origin, relativize)
+ rdtypes = []
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ nrdtype = dns.rdatatype.from_text(token.value)
+ if nrdtype == 0:
+ raise dns.exception.SyntaxError("NSEC with bit 0")
+ if nrdtype > 65535:
+ raise dns.exception.SyntaxError("NSEC with bit > 65535")
+ rdtypes.append(nrdtype)
+ rdtypes.sort()
+ window = 0
+ octets = 0
+ prior_rdtype = 0
+ bitmap = bytearray(b'\0' * 32)
+ windows = []
+ for nrdtype in rdtypes:
+ if nrdtype == prior_rdtype:
+ continue
+ prior_rdtype = nrdtype
+ new_window = nrdtype // 256
+ if new_window != window:
+ windows.append((window, bitmap[0:octets]))
+ bitmap = bytearray(b'\0' * 32)
+ window = new_window
+ offset = nrdtype % 256
+ byte = offset // 8
+ bit = offset % 8
+ octets = byte + 1
+ bitmap[byte] = bitmap[byte] | (0x80 >> bit)
+
+ windows.append((window, bitmap[0:octets]))
+ return cls(rdclass, rdtype, next, windows)
+
+ def to_wire(self, file, compress=None, origin=None):
+ self.next.to_wire(file, None, origin)
+ for (window, bitmap) in self.windows:
+ file.write(struct.pack('!BB', window, len(bitmap)))
+ file.write(bitmap)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (next, cused) = dns.name.from_wire(wire[: current + rdlen], current)
+ current += cused
+ rdlen -= cused
+ windows = []
+ while rdlen > 0:
+ if rdlen < 3:
+ raise dns.exception.FormError("NSEC too short")
+ window = wire[current]
+ octets = wire[current + 1]
+ if octets == 0 or octets > 32:
+ raise dns.exception.FormError("bad NSEC octets")
+ current += 2
+ rdlen -= 2
+ if rdlen < octets:
+ raise dns.exception.FormError("bad NSEC bitmap length")
+ bitmap = bytearray(wire[current: current + octets].unwrap())
+ current += octets
+ rdlen -= octets
+ windows.append((window, bitmap))
+ if origin is not None:
+ next = next.relativize(origin)
+ return cls(rdclass, rdtype, next, windows)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.next = self.next.choose_relativity(origin, relativize)
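The window/bitmap packing loop in `from_text` here (and again in NSEC3 below) is RFC 4034's type-bitmap encoding: each window of 256 type codes becomes a `(window, bitmap)` pair, bit 0 being the high-order bit of the first octet. A standalone Python 3 sketch, outside the patch itself — the helper names are hypothetical, and it includes the empty-window guard that only the NSEC3 version adds:

```python
def encode_type_bitmaps(rdtypes):
    """Pack RR type codes into NSEC-style (window, bitmap) tuples,
    mirroring the loop in NSEC.from_text / NSEC3.from_text."""
    windows = []
    window = 0
    octets = 0
    bitmap = bytearray(32)
    for nrdtype in sorted(set(rdtypes)):
        new_window = nrdtype // 256
        if new_window != window:
            if octets != 0:
                windows.append((window, bytes(bitmap[:octets])))
            bitmap = bytearray(32)
            window = new_window
        byte, bit = divmod(nrdtype % 256, 8)
        octets = byte + 1           # bitmap is truncated after the last set byte
        bitmap[byte] |= 0x80 >> bit  # bit 0 is the high-order bit
    if octets != 0:
        windows.append((window, bytes(bitmap[:octets])))
    return windows

def decode_type_bitmaps(windows):
    """Inverse of the above, mirroring the loop in NSEC.to_text."""
    rdtypes = []
    for window, bitmap in windows:
        for i, byte in enumerate(bitmap):
            for j in range(8):
                if byte & (0x80 >> j):
                    rdtypes.append(window * 256 + i * 8 + j)
    return rdtypes
```

For example, types A (1), NS (2) and MX (15) all land in window 0 as the two octets `60 01`, while a type code of 257 starts a second window.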
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3.py
new file mode 100644
index 0000000000..1c281c4a4d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3.py
@@ -0,0 +1,196 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import base64
+import binascii
+import string
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+from dns._compat import xrange, text_type, PY3
+
+# pylint: disable=deprecated-string-function
+if PY3:
+ b32_hex_to_normal = bytes.maketrans(b'0123456789ABCDEFGHIJKLMNOPQRSTUV',
+ b'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567')
+ b32_normal_to_hex = bytes.maketrans(b'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567',
+ b'0123456789ABCDEFGHIJKLMNOPQRSTUV')
+else:
+ b32_hex_to_normal = string.maketrans('0123456789ABCDEFGHIJKLMNOPQRSTUV',
+ 'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567')
+ b32_normal_to_hex = string.maketrans('ABCDEFGHIJKLMNOPQRSTUVWXYZ234567',
+ '0123456789ABCDEFGHIJKLMNOPQRSTUV')
+# pylint: enable=deprecated-string-function
+
+
+# hash algorithm constants
+SHA1 = 1
+
+# flag constants
+OPTOUT = 1
+
+
+class NSEC3(dns.rdata.Rdata):
+
+ """NSEC3 record
+
+ @ivar algorithm: the hash algorithm number
+ @type algorithm: int
+ @ivar flags: the flags
+ @type flags: int
+ @ivar iterations: the number of iterations
+ @type iterations: int
+ @ivar salt: the salt
+ @type salt: string
+ @ivar next: the next name hash
+ @type next: string
+ @ivar windows: the windowed bitmap list
+ @type windows: list of (window number, string) tuples"""
+
+ __slots__ = ['algorithm', 'flags', 'iterations', 'salt', 'next', 'windows']
+
+ def __init__(self, rdclass, rdtype, algorithm, flags, iterations, salt,
+ next, windows):
+ super(NSEC3, self).__init__(rdclass, rdtype)
+ self.algorithm = algorithm
+ self.flags = flags
+ self.iterations = iterations
+ if isinstance(salt, text_type):
+ self.salt = salt.encode()
+ else:
+ self.salt = salt
+ self.next = next
+ self.windows = windows
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ next = base64.b32encode(self.next).translate(
+ b32_normal_to_hex).lower().decode()
+ if self.salt == b'':
+ salt = '-'
+ else:
+ salt = binascii.hexlify(self.salt).decode()
+ text = u''
+ for (window, bitmap) in self.windows:
+ bits = []
+ for i in xrange(0, len(bitmap)):
+ byte = bitmap[i]
+ for j in xrange(0, 8):
+ if byte & (0x80 >> j):
+ bits.append(dns.rdatatype.to_text(window * 256 +
+ i * 8 + j))
+ text += (u' ' + u' '.join(bits))
+ return u'%u %u %u %s %s%s' % (self.algorithm, self.flags,
+ self.iterations, salt, next, text)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ algorithm = tok.get_uint8()
+ flags = tok.get_uint8()
+ iterations = tok.get_uint16()
+ salt = tok.get_string()
+ if salt == u'-':
+ salt = b''
+ else:
+ salt = binascii.unhexlify(salt.encode('ascii'))
+ next = tok.get_string().encode(
+ 'ascii').upper().translate(b32_hex_to_normal)
+ next = base64.b32decode(next)
+ rdtypes = []
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ nrdtype = dns.rdatatype.from_text(token.value)
+ if nrdtype == 0:
+ raise dns.exception.SyntaxError("NSEC3 with bit 0")
+ if nrdtype > 65535:
+ raise dns.exception.SyntaxError("NSEC3 with bit > 65535")
+ rdtypes.append(nrdtype)
+ rdtypes.sort()
+ window = 0
+ octets = 0
+ prior_rdtype = 0
+ bitmap = bytearray(b'\0' * 32)
+ windows = []
+ for nrdtype in rdtypes:
+ if nrdtype == prior_rdtype:
+ continue
+ prior_rdtype = nrdtype
+ new_window = nrdtype // 256
+ if new_window != window:
+ if octets != 0:
+ windows.append((window, bitmap[0:octets]))
+ bitmap = bytearray(b'\0' * 32)
+ window = new_window
+ offset = nrdtype % 256
+ byte = offset // 8
+ bit = offset % 8
+ octets = byte + 1
+ bitmap[byte] = bitmap[byte] | (0x80 >> bit)
+ if octets != 0:
+ windows.append((window, bitmap[0:octets]))
+ return cls(rdclass, rdtype, algorithm, flags, iterations, salt, next,
+ windows)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.salt)
+ file.write(struct.pack("!BBHB", self.algorithm, self.flags,
+ self.iterations, l))
+ file.write(self.salt)
+ l = len(self.next)
+ file.write(struct.pack("!B", l))
+ file.write(self.next)
+ for (window, bitmap) in self.windows:
+ file.write(struct.pack("!BB", window, len(bitmap)))
+ file.write(bitmap)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (algorithm, flags, iterations, slen) = \
+ struct.unpack('!BBHB', wire[current: current + 5])
+
+ current += 5
+ rdlen -= 5
+ salt = wire[current: current + slen].unwrap()
+ current += slen
+ rdlen -= slen
+ nlen = wire[current]
+ current += 1
+ rdlen -= 1
+ next = wire[current: current + nlen].unwrap()
+ current += nlen
+ rdlen -= nlen
+ windows = []
+ while rdlen > 0:
+ if rdlen < 3:
+ raise dns.exception.FormError("NSEC3 too short")
+ window = wire[current]
+ octets = wire[current + 1]
+ if octets == 0 or octets > 32:
+ raise dns.exception.FormError("bad NSEC3 octets")
+ current += 2
+ rdlen -= 2
+ if rdlen < octets:
+ raise dns.exception.FormError("bad NSEC3 bitmap length")
+ bitmap = bytearray(wire[current: current + octets].unwrap())
+ current += octets
+ rdlen -= octets
+ windows.append((window, bitmap))
+ return cls(rdclass, rdtype, algorithm, flags, iterations, salt, next,
+ windows)
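The `maketrans` tables at the top of this module exist because NSEC3's next-hashed-owner field is presented in RFC 4648 "base32hex", while Python's `base64` module only implements the standard base32 alphabet — so the code encodes/decodes with the standard alphabet and remaps the symbols. A Python 3 sketch of the same trick (the helper names are hypothetical):

```python
import base64

B32_NORMAL = b'ABCDEFGHIJKLMNOPQRSTUVWXYZ234567'  # RFC 4648 base32
B32_HEX = b'0123456789ABCDEFGHIJKLMNOPQRSTUV'     # RFC 4648 base32hex
TO_HEX = bytes.maketrans(B32_NORMAL, B32_HEX)
FROM_HEX = bytes.maketrans(B32_HEX, B32_NORMAL)

def b32hex_encode(data):
    # encode with the standard alphabet, then remap each symbol
    return base64.b32encode(data).translate(TO_HEX)

def b32hex_decode(text):
    # NSEC3 presentation is lower-case; '=' padding passes through untouched
    return base64.b32decode(text.upper().encode('ascii').translate(FROM_HEX))
```

`b32hex_encode(b'foobar')` yields `CPNMUOJ1E8======`, matching the RFC 4648 base32hex test vector.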
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3PARAM.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3PARAM.py
new file mode 100644
index 0000000000..87c36e5675
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/NSEC3PARAM.py
@@ -0,0 +1,90 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import binascii
+
+import dns.exception
+import dns.rdata
+from dns._compat import text_type
+
+
+class NSEC3PARAM(dns.rdata.Rdata):
+
+ """NSEC3PARAM record
+
+ @ivar algorithm: the hash algorithm number
+ @type algorithm: int
+ @ivar flags: the flags
+ @type flags: int
+ @ivar iterations: the number of iterations
+ @type iterations: int
+ @ivar salt: the salt
+ @type salt: string"""
+
+ __slots__ = ['algorithm', 'flags', 'iterations', 'salt']
+
+ def __init__(self, rdclass, rdtype, algorithm, flags, iterations, salt):
+ super(NSEC3PARAM, self).__init__(rdclass, rdtype)
+ self.algorithm = algorithm
+ self.flags = flags
+ self.iterations = iterations
+ if isinstance(salt, text_type):
+ self.salt = salt.encode()
+ else:
+ self.salt = salt
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ if self.salt == b'':
+ salt = '-'
+ else:
+ salt = binascii.hexlify(self.salt).decode()
+ return '%u %u %u %s' % (self.algorithm, self.flags, self.iterations,
+ salt)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ algorithm = tok.get_uint8()
+ flags = tok.get_uint8()
+ iterations = tok.get_uint16()
+ salt = tok.get_string()
+ if salt == '-':
+ salt = ''
+ else:
+ salt = binascii.unhexlify(salt.encode())
+ tok.get_eol()
+ return cls(rdclass, rdtype, algorithm, flags, iterations, salt)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.salt)
+ file.write(struct.pack("!BBHB", self.algorithm, self.flags,
+ self.iterations, l))
+ file.write(self.salt)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (algorithm, flags, iterations, slen) = \
+ struct.unpack('!BBHB',
+ wire[current: current + 5])
+ current += 5
+ rdlen -= 5
+ salt = wire[current: current + slen].unwrap()
+ current += slen
+ rdlen -= slen
+ if rdlen != 0:
+ raise dns.exception.FormError
+ return cls(rdclass, rdtype, algorithm, flags, iterations, salt)
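NSEC3PARAM's wire format is just the fixed `!BBHB` header seen in `to_wire` above followed by the salt. A round-trip sketch with hypothetical helper names:

```python
import struct

def nsec3param_to_wire(algorithm, flags, iterations, salt):
    # algorithm (1 byte), flags (1 byte), iterations (2 bytes, network
    # order), salt length (1 byte), then the salt itself
    return struct.pack('!BBHB', algorithm, flags, iterations, len(salt)) + salt

def nsec3param_from_wire(wire):
    algorithm, flags, iterations, slen = struct.unpack('!BBHB', wire[:5])
    if len(wire) != 5 + slen:
        # mirrors the FormError check on trailing data in from_wire above
        raise ValueError('bad NSEC3PARAM rdata length')
    return algorithm, flags, iterations, wire[5:5 + slen]
```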
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/OPENPGPKEY.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/OPENPGPKEY.py
new file mode 100644
index 0000000000..a066cf98df
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/OPENPGPKEY.py
@@ -0,0 +1,60 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2016 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import base64
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+
+class OPENPGPKEY(dns.rdata.Rdata):
+
+ """OPENPGPKEY record
+
+ @ivar key: the key
+ @type key: bytes
+ @see: RFC 7929
+ """
+
+ def __init__(self, rdclass, rdtype, key):
+ super(OPENPGPKEY, self).__init__(rdclass, rdtype)
+ self.key = key
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return dns.rdata._base64ify(self.key)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ key = base64.b64decode(b64)
+ return cls(rdclass, rdtype, key)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(self.key)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ key = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, key)
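`from_text` above shows the pattern these modules use for base64 fields (OPENPGPKEY here, RRSIG later): zone files may split the data across many whitespace-separated tokens, which must be concatenated before a single decode, since padding is only valid once the pieces are reassembled. Condensed into a standalone, hypothetical helper:

```python
import base64

def decode_multitoken_b64(tokens):
    # join the whitespace-separated chunks first, then decode once
    return base64.b64decode(b''.join(t.encode('ascii') for t in tokens))
```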
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/PTR.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/PTR.py
new file mode 100644
index 0000000000..20cd50761d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/PTR.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.nsbase
+
+
+class PTR(dns.rdtypes.nsbase.NSBase):
+
+ """PTR record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/RP.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RP.py
new file mode 100644
index 0000000000..8f07be9071
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RP.py
@@ -0,0 +1,82 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class RP(dns.rdata.Rdata):
+
+ """RP record
+
+ @ivar mbox: The responsible person's mailbox
+ @type mbox: dns.name.Name object
+ @ivar txt: The owner name of a node with TXT records, or the root name
+ if no TXT records are associated with this RP.
+ @type txt: dns.name.Name object
+ @see: RFC 1183"""
+
+ __slots__ = ['mbox', 'txt']
+
+ def __init__(self, rdclass, rdtype, mbox, txt):
+ super(RP, self).__init__(rdclass, rdtype)
+ self.mbox = mbox
+ self.txt = txt
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ mbox = self.mbox.choose_relativity(origin, relativize)
+ txt = self.txt.choose_relativity(origin, relativize)
+ return "{} {}".format(str(mbox), str(txt))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ mbox = tok.get_name()
+ txt = tok.get_name()
+ mbox = mbox.choose_relativity(origin, relativize)
+ txt = txt.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, mbox, txt)
+
+ def to_wire(self, file, compress=None, origin=None):
+ self.mbox.to_wire(file, None, origin)
+ self.txt.to_wire(file, None, origin)
+
+ def to_digestable(self, origin=None):
+ return self.mbox.to_digestable(origin) + \
+ self.txt.to_digestable(origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (mbox, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ current += cused
+ rdlen -= cused
+ if rdlen <= 0:
+ raise dns.exception.FormError
+ (txt, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ mbox = mbox.relativize(origin)
+ txt = txt.relativize(origin)
+ return cls(rdclass, rdtype, mbox, txt)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.mbox = self.mbox.choose_relativity(origin, relativize)
+ self.txt = self.txt.choose_relativity(origin, relativize)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/RRSIG.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RRSIG.py
new file mode 100644
index 0000000000..d3756ece4e
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RRSIG.py
@@ -0,0 +1,158 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import base64
+import calendar
+import struct
+import time
+
+import dns.dnssec
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+
+
+class BadSigTime(dns.exception.DNSException):
+
+ """Time in DNS SIG or RRSIG resource record cannot be parsed."""
+
+
+def sigtime_to_posixtime(what):
+ if len(what) != 14:
+ raise BadSigTime
+ year = int(what[0:4])
+ month = int(what[4:6])
+ day = int(what[6:8])
+ hour = int(what[8:10])
+ minute = int(what[10:12])
+ second = int(what[12:14])
+ return calendar.timegm((year, month, day, hour, minute, second,
+ 0, 0, 0))
+
+
+def posixtime_to_sigtime(what):
+ return time.strftime('%Y%m%d%H%M%S', time.gmtime(what))
+
+
+class RRSIG(dns.rdata.Rdata):
+
+ """RRSIG record
+
+ @ivar type_covered: the rdata type this signature covers
+ @type type_covered: int
+ @ivar algorithm: the algorithm used for the sig
+ @type algorithm: int
+ @ivar labels: number of labels
+ @type labels: int
+ @ivar original_ttl: the original TTL
+ @type original_ttl: long
+ @ivar expiration: signature expiration time
+ @type expiration: long
+ @ivar inception: signature inception time
+ @type inception: long
+ @ivar key_tag: the key tag
+ @type key_tag: int
+ @ivar signer: the signer
+ @type signer: dns.name.Name object
+ @ivar signature: the signature
+ @type signature: string"""
+
+ __slots__ = ['type_covered', 'algorithm', 'labels', 'original_ttl',
+ 'expiration', 'inception', 'key_tag', 'signer',
+ 'signature']
+
+ def __init__(self, rdclass, rdtype, type_covered, algorithm, labels,
+ original_ttl, expiration, inception, key_tag, signer,
+ signature):
+ super(RRSIG, self).__init__(rdclass, rdtype)
+ self.type_covered = type_covered
+ self.algorithm = algorithm
+ self.labels = labels
+ self.original_ttl = original_ttl
+ self.expiration = expiration
+ self.inception = inception
+ self.key_tag = key_tag
+ self.signer = signer
+ self.signature = signature
+
+ def covers(self):
+ return self.type_covered
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%s %d %d %d %s %s %d %s %s' % (
+ dns.rdatatype.to_text(self.type_covered),
+ self.algorithm,
+ self.labels,
+ self.original_ttl,
+ posixtime_to_sigtime(self.expiration),
+ posixtime_to_sigtime(self.inception),
+ self.key_tag,
+ self.signer.choose_relativity(origin, relativize),
+ dns.rdata._base64ify(self.signature)
+ )
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ type_covered = dns.rdatatype.from_text(tok.get_string())
+ algorithm = dns.dnssec.algorithm_from_text(tok.get_string())
+ labels = tok.get_int()
+ original_ttl = tok.get_ttl()
+ expiration = sigtime_to_posixtime(tok.get_string())
+ inception = sigtime_to_posixtime(tok.get_string())
+ key_tag = tok.get_int()
+ signer = tok.get_name()
+ signer = signer.choose_relativity(origin, relativize)
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ signature = base64.b64decode(b64)
+ return cls(rdclass, rdtype, type_covered, algorithm, labels,
+ original_ttl, expiration, inception, key_tag, signer,
+ signature)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack('!HBBIIIH', self.type_covered,
+ self.algorithm, self.labels,
+ self.original_ttl, self.expiration,
+ self.inception, self.key_tag)
+ file.write(header)
+ self.signer.to_wire(file, None, origin)
+ file.write(self.signature)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ header = struct.unpack('!HBBIIIH', wire[current: current + 18])
+ current += 18
+ rdlen -= 18
+ (signer, cused) = dns.name.from_wire(wire[: current + rdlen], current)
+ current += cused
+ rdlen -= cused
+ if origin is not None:
+ signer = signer.relativize(origin)
+ signature = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], header[1], header[2],
+ header[3], header[4], header[5], header[6], signer,
+ signature)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.signer = self.signer.choose_relativity(origin, relativize)
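The `sigtime_to_posixtime`/`posixtime_to_sigtime` pair in this module converts between RRSIG's `YYYYMMDDHHMMSS` UTC presentation format and POSIX timestamps via `calendar.timegm` and `time.gmtime`. A slightly stricter standalone variant (the digit check is an addition of this sketch, not in the module):

```python
import calendar
import time

def sigtime_to_posixtime(what):
    # RFC 4034 presentation format: YYYYMMDDHHMMSS, always UTC
    if len(what) != 14 or not what.isdigit():
        raise ValueError('bad signature time: %r' % (what,))
    return calendar.timegm((int(what[0:4]), int(what[4:6]), int(what[6:8]),
                            int(what[8:10]), int(what[10:12]),
                            int(what[12:14]), 0, 0, 0))

def posixtime_to_sigtime(when):
    return time.strftime('%Y%m%d%H%M%S', time.gmtime(when))
```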
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/RT.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RT.py
new file mode 100644
index 0000000000..d0feb79e9d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/RT.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.mxbase
+
+
+class RT(dns.rdtypes.mxbase.UncompressedDowncasingMX):
+
+ """RT record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/SOA.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SOA.py
new file mode 100644
index 0000000000..aec81cad8a
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SOA.py
@@ -0,0 +1,116 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class SOA(dns.rdata.Rdata):
+
+ """SOA record
+
+ @ivar mname: the SOA MNAME (master name) field
+ @type mname: dns.name.Name object
+ @ivar rname: the SOA RNAME (responsible name) field
+ @type rname: dns.name.Name object
+ @ivar serial: The zone's serial number
+ @type serial: int
+ @ivar refresh: The zone's refresh value (in seconds)
+ @type refresh: int
+ @ivar retry: The zone's retry value (in seconds)
+ @type retry: int
+ @ivar expire: The zone's expiration value (in seconds)
+ @type expire: int
+ @ivar minimum: The zone's negative caching time (in seconds, called
+ "minimum" for historical reasons)
+ @type minimum: int
+ @see: RFC 1035"""
+
+ __slots__ = ['mname', 'rname', 'serial', 'refresh', 'retry', 'expire',
+ 'minimum']
+
+ def __init__(self, rdclass, rdtype, mname, rname, serial, refresh, retry,
+ expire, minimum):
+ super(SOA, self).__init__(rdclass, rdtype)
+ self.mname = mname
+ self.rname = rname
+ self.serial = serial
+ self.refresh = refresh
+ self.retry = retry
+ self.expire = expire
+ self.minimum = minimum
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ mname = self.mname.choose_relativity(origin, relativize)
+ rname = self.rname.choose_relativity(origin, relativize)
+ return '%s %s %d %d %d %d %d' % (
+ mname, rname, self.serial, self.refresh, self.retry,
+ self.expire, self.minimum)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ mname = tok.get_name()
+ rname = tok.get_name()
+ mname = mname.choose_relativity(origin, relativize)
+ rname = rname.choose_relativity(origin, relativize)
+ serial = tok.get_uint32()
+ refresh = tok.get_ttl()
+ retry = tok.get_ttl()
+ expire = tok.get_ttl()
+ minimum = tok.get_ttl()
+ tok.get_eol()
+ return cls(rdclass, rdtype, mname, rname, serial, refresh, retry,
+ expire, minimum)
+
+ def to_wire(self, file, compress=None, origin=None):
+ self.mname.to_wire(file, compress, origin)
+ self.rname.to_wire(file, compress, origin)
+ five_ints = struct.pack('!IIIII', self.serial, self.refresh,
+ self.retry, self.expire, self.minimum)
+ file.write(five_ints)
+
+ def to_digestable(self, origin=None):
+ return self.mname.to_digestable(origin) + \
+ self.rname.to_digestable(origin) + \
+ struct.pack('!IIIII', self.serial, self.refresh,
+ self.retry, self.expire, self.minimum)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (mname, cused) = dns.name.from_wire(wire[: current + rdlen], current)
+ current += cused
+ rdlen -= cused
+ (rname, cused) = dns.name.from_wire(wire[: current + rdlen], current)
+ current += cused
+ rdlen -= cused
+ if rdlen != 20:
+ raise dns.exception.FormError
+ five_ints = struct.unpack('!IIIII',
+ wire[current: current + rdlen])
+ if origin is not None:
+ mname = mname.relativize(origin)
+ rname = rname.relativize(origin)
+ return cls(rdclass, rdtype, mname, rname,
+ five_ints[0], five_ints[1], five_ints[2], five_ints[3],
+ five_ints[4])
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.mname = self.mname.choose_relativity(origin, relativize)
+ self.rname = self.rname.choose_relativity(origin, relativize)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/SPF.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SPF.py
new file mode 100644
index 0000000000..41dee62387
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SPF.py
@@ -0,0 +1,25 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006, 2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.txtbase
+
+
+class SPF(dns.rdtypes.txtbase.TXTBase):
+
+ """SPF record
+
+ @see: RFC 4408"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/SSHFP.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SSHFP.py
new file mode 100644
index 0000000000..c18311e906
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/SSHFP.py
@@ -0,0 +1,79 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2005-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import binascii
+
+import dns.rdata
+import dns.rdatatype
+
+
+class SSHFP(dns.rdata.Rdata):
+
+ """SSHFP record
+
+ @ivar algorithm: the algorithm
+ @type algorithm: int
+ @ivar fp_type: the digest type
+ @type fp_type: int
+ @ivar fingerprint: the fingerprint
+ @type fingerprint: string
+ @see: draft-ietf-secsh-dns-05.txt"""
+
+ __slots__ = ['algorithm', 'fp_type', 'fingerprint']
+
+ def __init__(self, rdclass, rdtype, algorithm, fp_type,
+ fingerprint):
+ super(SSHFP, self).__init__(rdclass, rdtype)
+ self.algorithm = algorithm
+ self.fp_type = fp_type
+ self.fingerprint = fingerprint
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%d %d %s' % (self.algorithm,
+ self.fp_type,
+ dns.rdata._hexify(self.fingerprint,
+ chunksize=128))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ algorithm = tok.get_uint8()
+ fp_type = tok.get_uint8()
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ fingerprint = b''.join(chunks)
+ fingerprint = binascii.unhexlify(fingerprint)
+ return cls(rdclass, rdtype, algorithm, fp_type, fingerprint)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack("!BB", self.algorithm, self.fp_type)
+ file.write(header)
+ file.write(self.fingerprint)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ header = struct.unpack("!BB", wire[current: current + 2])
+ current += 2
+ rdlen -= 2
+ fingerprint = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], header[1], fingerprint)
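The SSHFP wire layout handled by `to_wire`/`from_wire` above is simple enough to sketch with the standard library alone. The helper names below are illustrative, not dnspython API, and this is a Python 3 sketch while the vendored module targets Python 2:

```python
import binascii
import struct

def sshfp_to_wire(algorithm, fp_type, fingerprint_hex):
    # SSHFP RDATA: one octet each for algorithm and digest type,
    # then the raw fingerprint bytes (no length prefix).
    fingerprint = binascii.unhexlify(fingerprint_hex)
    return struct.pack("!BB", algorithm, fp_type) + fingerprint

def sshfp_from_wire(wire):
    # The two-octet header is fixed; the rest of the RDATA is the fingerprint.
    algorithm, fp_type = struct.unpack("!BB", wire[:2])
    return algorithm, fp_type, binascii.hexlify(wire[2:]).decode()

wire = sshfp_to_wire(4, 2, "aabbcc")
assert sshfp_from_wire(wire) == (4, 2, "aabbcc")
```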
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/TLSA.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/TLSA.py
new file mode 100644
index 0000000000..a135c2b3da
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/TLSA.py
@@ -0,0 +1,84 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2005-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import binascii
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+
+
+class TLSA(dns.rdata.Rdata):
+
+ """TLSA record
+
+ @ivar usage: The certificate usage
+ @type usage: int
+ @ivar selector: The selector field
+ @type selector: int
+ @ivar mtype: The 'matching type' field
+ @type mtype: int
+ @ivar cert: The 'Certificate Association Data' field
+ @type cert: string
+ @see: RFC 6698"""
+
+ __slots__ = ['usage', 'selector', 'mtype', 'cert']
+
+ def __init__(self, rdclass, rdtype, usage, selector,
+ mtype, cert):
+ super(TLSA, self).__init__(rdclass, rdtype)
+ self.usage = usage
+ self.selector = selector
+ self.mtype = mtype
+ self.cert = cert
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%d %d %d %s' % (self.usage,
+ self.selector,
+ self.mtype,
+ dns.rdata._hexify(self.cert,
+ chunksize=128))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ usage = tok.get_uint8()
+ selector = tok.get_uint8()
+ mtype = tok.get_uint8()
+ cert_chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ cert_chunks.append(t.value.encode())
+ cert = b''.join(cert_chunks)
+ cert = binascii.unhexlify(cert)
+ return cls(rdclass, rdtype, usage, selector, mtype, cert)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack("!BBB", self.usage, self.selector, self.mtype)
+ file.write(header)
+ file.write(self.cert)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ header = struct.unpack("!BBB", wire[current: current + 3])
+ current += 3
+ rdlen -= 3
+ cert = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], header[1], header[2], cert)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/TXT.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/TXT.py
new file mode 100644
index 0000000000..c5ae919c5e
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/TXT.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.txtbase
+
+
+class TXT(dns.rdtypes.txtbase.TXTBase):
+
+ """TXT record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/URI.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/URI.py
new file mode 100644
index 0000000000..f5b65ed6a9
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/URI.py
@@ -0,0 +1,82 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+# Copyright (C) 2015 Red Hat, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.name
+from dns._compat import text_type
+
+
+class URI(dns.rdata.Rdata):
+
+ """URI record
+
+ @ivar priority: the priority
+ @type priority: int
+ @ivar weight: the weight
+ @type weight: int
+ @ivar target: the target host
+    @type target: string
+ @see: draft-faltstrom-uri-13"""
+
+ __slots__ = ['priority', 'weight', 'target']
+
+ def __init__(self, rdclass, rdtype, priority, weight, target):
+ super(URI, self).__init__(rdclass, rdtype)
+ self.priority = priority
+ self.weight = weight
+ if len(target) < 1:
+ raise dns.exception.SyntaxError("URI target cannot be empty")
+ if isinstance(target, text_type):
+ self.target = target.encode()
+ else:
+ self.target = target
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%d %d "%s"' % (self.priority, self.weight,
+ self.target.decode())
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ priority = tok.get_uint16()
+ weight = tok.get_uint16()
+ target = tok.get().unescape()
+ if not (target.is_quoted_string() or target.is_identifier()):
+ raise dns.exception.SyntaxError("URI target must be a string")
+ tok.get_eol()
+ return cls(rdclass, rdtype, priority, weight, target.value)
+
+ def to_wire(self, file, compress=None, origin=None):
+ two_ints = struct.pack("!HH", self.priority, self.weight)
+ file.write(two_ints)
+ file.write(self.target)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ if rdlen < 5:
+ raise dns.exception.FormError('URI RR is shorter than 5 octets')
+
+ (priority, weight) = struct.unpack('!HH', wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ target = wire[current: current + rdlen]
+ current += rdlen
+
+ return cls(rdclass, rdtype, priority, weight, target)
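URI RDATA differs from most blob-carrying types in that the target runs to the end of the RDATA with no length prefix, which is why `from_wire` above enforces a minimum of 5 octets (4 header octets plus a non-empty target). A minimal standard-library sketch, with illustrative helper names:

```python
import struct

def uri_rdata_to_wire(priority, weight, target):
    # Two 16-bit big-endian fields, then the target bytes verbatim.
    return struct.pack("!HH", priority, weight) + target

def uri_rdata_from_wire(wire):
    if len(wire) < 5:  # 4 header octets plus at least one target octet
        raise ValueError("URI RR is shorter than 5 octets")
    priority, weight = struct.unpack("!HH", wire[:4])
    return priority, weight, wire[4:]

wire = uri_rdata_to_wire(10, 1, b"ftp://ftp1.example.com/public")
assert uri_rdata_from_wire(wire) == (10, 1, b"ftp://ftp1.example.com/public")
```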
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/X25.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/X25.py
new file mode 100644
index 0000000000..e530a2c2a6
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/X25.py
@@ -0,0 +1,66 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+from dns._compat import text_type
+
+
+class X25(dns.rdata.Rdata):
+
+ """X25 record
+
+ @ivar address: the PSDN address
+ @type address: string
+ @see: RFC 1183"""
+
+ __slots__ = ['address']
+
+ def __init__(self, rdclass, rdtype, address):
+ super(X25, self).__init__(rdclass, rdtype)
+ if isinstance(address, text_type):
+ self.address = address.encode()
+ else:
+ self.address = address
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '"%s"' % dns.rdata._escapify(self.address)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_string()
+ tok.get_eol()
+ return cls(rdclass, rdtype, address)
+
+ def to_wire(self, file, compress=None, origin=None):
+ l = len(self.address)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(self.address)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l != rdlen:
+ raise dns.exception.FormError
+ address = wire[current: current + l].unwrap()
+ return cls(rdclass, rdtype, address)
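X25 stores its PSDN address as a single DNS character-string: a one-octet length followed by the bytes, which is what the `assert l < 256` and the `l != rdlen` check above guard. A standalone sketch (helper names are illustrative; Python 3 byte indexing yields ints, unlike the vendored Python 2 code):

```python
import struct

def x25_to_wire(address):
    # A character-string is a one-byte length plus up to 255 octets.
    if len(address) > 255:
        raise ValueError("character-string longer than 255 octets")
    return struct.pack("!B", len(address)) + address

def x25_from_wire(wire):
    length = wire[0]
    if length != len(wire) - 1:  # the length octet must cover the whole RDATA
        raise ValueError("malformed X25 RDATA")
    return wire[1:1 + length]

assert x25_from_wire(x25_to_wire(b"311061700956")) == b"311061700956"
```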
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/ANY/__init__.py b/openpype/vendor/python/python_2/dns/rdtypes/ANY/__init__.py
new file mode 100644
index 0000000000..ca41ef8055
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/ANY/__init__.py
@@ -0,0 +1,57 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Class ANY (generic) rdata type classes."""
+
+__all__ = [
+ 'AFSDB',
+ 'AVC',
+ 'CAA',
+ 'CDNSKEY',
+ 'CDS',
+ 'CERT',
+ 'CNAME',
+ 'CSYNC',
+ 'DLV',
+ 'DNAME',
+ 'DNSKEY',
+ 'DS',
+ 'EUI48',
+ 'EUI64',
+ 'GPOS',
+ 'HINFO',
+ 'HIP',
+ 'ISDN',
+ 'LOC',
+ 'MX',
+ 'NS',
+ 'NSEC',
+ 'NSEC3',
+ 'NSEC3PARAM',
+ 'OPENPGPKEY',
+ 'PTR',
+ 'RP',
+ 'RRSIG',
+ 'RT',
+ 'SOA',
+ 'SPF',
+ 'SSHFP',
+ 'TLSA',
+ 'TXT',
+ 'URI',
+ 'X25',
+]
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/CH/A.py b/openpype/vendor/python/python_2/dns/rdtypes/CH/A.py
new file mode 100644
index 0000000000..e65d192d82
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/CH/A.py
@@ -0,0 +1,70 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.name
+import dns.rdtypes.mxbase
+
+
+class A(dns.rdtypes.mxbase.MXBase):
+
+ """A record for Chaosnet
+ @ivar domain: the domain of the address
+ @type domain: dns.name.Name object
+ @ivar address: the 16-bit address
+ @type address: int"""
+
+ __slots__ = ['domain', 'address']
+
+ def __init__(self, rdclass, rdtype, address, domain):
+ super(A, self).__init__(rdclass, rdtype, address, domain)
+ self.domain = domain
+ self.address = address
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ domain = self.domain.choose_relativity(origin, relativize)
+ return '%s %o' % (domain, self.address)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ domain = tok.get_name()
+ address = tok.get_uint16(base=8)
+ domain = domain.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, address, domain)
+
+ def to_wire(self, file, compress=None, origin=None):
+ self.domain.to_wire(file, compress, origin)
+ pref = struct.pack("!H", self.address)
+ file.write(pref)
+
+ def to_digestable(self, origin=None):
+ return self.domain.to_digestable(origin) + \
+ struct.pack("!H", self.address)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (domain, cused) = dns.name.from_wire(wire[: current + rdlen-2],
+ current)
+ current += cused
+ (address,) = struct.unpack('!H', wire[current: current + 2])
+ if cused+2 != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ domain = domain.relativize(origin)
+ return cls(rdclass, rdtype, address, domain)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.domain = self.domain.choose_relativity(origin, relativize)
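Chaosnet addresses are conventionally written in octal, which is why `to_text` above formats with `%o` and `from_text` parses with `base=8`. A small sketch of just that text convention (function names are illustrative, not dnspython API):

```python
def ch_a_to_text(domain, address):
    # Render the 16-bit Chaosnet address in octal, as the class above does.
    return "%s %o" % (domain, address)

def ch_a_address_from_text(text):
    # Parse the octal address field back to an int.
    return int(text, 8)

assert ch_a_to_text("ch-addr.net.", 0o3600) == "ch-addr.net. 3600"
assert ch_a_address_from_text("3600") == 0o3600
```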
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/CH/__init__.py b/openpype/vendor/python/python_2/dns/rdtypes/CH/__init__.py
new file mode 100644
index 0000000000..7184a7332a
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/CH/__init__.py
@@ -0,0 +1,22 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Class CH rdata type classes."""
+
+__all__ = [
+ 'A',
+]
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/A.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/A.py
new file mode 100644
index 0000000000..8998982462
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/A.py
@@ -0,0 +1,54 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.exception
+import dns.ipv4
+import dns.rdata
+import dns.tokenizer
+
+
+class A(dns.rdata.Rdata):
+
+ """A record.
+
+ @ivar address: an IPv4 address
+ @type address: string (in the standard "dotted quad" format)"""
+
+ __slots__ = ['address']
+
+ def __init__(self, rdclass, rdtype, address):
+ super(A, self).__init__(rdclass, rdtype)
+ # check that it's OK
+ dns.ipv4.inet_aton(address)
+ self.address = address
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return self.address
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_identifier()
+ tok.get_eol()
+ return cls(rdclass, rdtype, address)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(dns.ipv4.inet_aton(self.address))
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ address = dns.ipv4.inet_ntoa(wire[current: current + rdlen])
+ return cls(rdclass, rdtype, address)
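The IN A type above is the simplest rdata: the wire form is exactly the 4-byte network-order address, with `dns.ipv4.inet_aton` doubling as validation in `__init__`. The same round trip can be sketched with the standard library (illustrative helpers, not dnspython API):

```python
import socket

def a_to_wire(address):
    # inet_aton validates the dotted quad and yields the 4 network-order bytes.
    return socket.inet_aton(address)

def a_from_wire(wire):
    # The whole RDATA is the address; no header to strip.
    return socket.inet_ntoa(wire)

assert a_to_wire("192.0.2.1") == b"\xc0\x00\x02\x01"
assert a_from_wire(b"\xc0\x00\x02\x01") == "192.0.2.1"
```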
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/AAAA.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/AAAA.py
new file mode 100644
index 0000000000..a77c5bf2a5
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/AAAA.py
@@ -0,0 +1,55 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.exception
+import dns.inet
+import dns.rdata
+import dns.tokenizer
+
+
+class AAAA(dns.rdata.Rdata):
+
+ """AAAA record.
+
+ @ivar address: an IPv6 address
+ @type address: string (in the standard IPv6 format)"""
+
+ __slots__ = ['address']
+
+ def __init__(self, rdclass, rdtype, address):
+ super(AAAA, self).__init__(rdclass, rdtype)
+ # check that it's OK
+ dns.inet.inet_pton(dns.inet.AF_INET6, address)
+ self.address = address
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return self.address
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_identifier()
+ tok.get_eol()
+ return cls(rdclass, rdtype, address)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(dns.inet.inet_pton(dns.inet.AF_INET6, self.address))
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ address = dns.inet.inet_ntop(dns.inet.AF_INET6,
+ wire[current: current + rdlen])
+ return cls(rdclass, rdtype, address)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/APL.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/APL.py
new file mode 100644
index 0000000000..48faf88ab7
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/APL.py
@@ -0,0 +1,165 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import binascii
+import codecs
+import struct
+
+import dns.exception
+import dns.inet
+import dns.rdata
+import dns.tokenizer
+from dns._compat import xrange, maybe_chr
+
+
+class APLItem(object):
+
+ """An APL list item.
+
+ @ivar family: the address family (IANA address family registry)
+ @type family: int
+ @ivar negation: is this item negated?
+ @type negation: bool
+ @ivar address: the address
+ @type address: string
+ @ivar prefix: the prefix length
+ @type prefix: int
+ """
+
+ __slots__ = ['family', 'negation', 'address', 'prefix']
+
+ def __init__(self, family, negation, address, prefix):
+ self.family = family
+ self.negation = negation
+ self.address = address
+ self.prefix = prefix
+
+ def __str__(self):
+ if self.negation:
+ return "!%d:%s/%s" % (self.family, self.address, self.prefix)
+ else:
+ return "%d:%s/%s" % (self.family, self.address, self.prefix)
+
+ def to_wire(self, file):
+ if self.family == 1:
+ address = dns.inet.inet_pton(dns.inet.AF_INET, self.address)
+ elif self.family == 2:
+ address = dns.inet.inet_pton(dns.inet.AF_INET6, self.address)
+ else:
+ address = binascii.unhexlify(self.address)
+ #
+ # Truncate least significant zero bytes.
+ #
+ last = 0
+ for i in xrange(len(address) - 1, -1, -1):
+ if address[i] != maybe_chr(0):
+ last = i + 1
+ break
+ address = address[0: last]
+ l = len(address)
+ assert l < 128
+ if self.negation:
+ l |= 0x80
+ header = struct.pack('!HBB', self.family, self.prefix, l)
+ file.write(header)
+ file.write(address)
+
+
+class APL(dns.rdata.Rdata):
+
+ """APL record.
+
+ @ivar items: a list of APL items
+    @type items: list of APLItem
+ @see: RFC 3123"""
+
+ __slots__ = ['items']
+
+ def __init__(self, rdclass, rdtype, items):
+ super(APL, self).__init__(rdclass, rdtype)
+ self.items = items
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return ' '.join(map(str, self.items))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ items = []
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ item = token.value
+ if item[0] == '!':
+ negation = True
+ item = item[1:]
+ else:
+ negation = False
+ (family, rest) = item.split(':', 1)
+ family = int(family)
+ (address, prefix) = rest.split('/', 1)
+ prefix = int(prefix)
+ item = APLItem(family, negation, address, prefix)
+ items.append(item)
+
+ return cls(rdclass, rdtype, items)
+
+ def to_wire(self, file, compress=None, origin=None):
+ for item in self.items:
+ item.to_wire(file)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+
+ items = []
+ while 1:
+ if rdlen == 0:
+ break
+ if rdlen < 4:
+ raise dns.exception.FormError
+ header = struct.unpack('!HBB', wire[current: current + 4])
+ afdlen = header[2]
+ if afdlen > 127:
+ negation = True
+ afdlen -= 128
+ else:
+ negation = False
+ current += 4
+ rdlen -= 4
+ if rdlen < afdlen:
+ raise dns.exception.FormError
+ address = wire[current: current + afdlen].unwrap()
+ l = len(address)
+ if header[0] == 1:
+ if l < 4:
+ address += b'\x00' * (4 - l)
+ address = dns.inet.inet_ntop(dns.inet.AF_INET, address)
+ elif header[0] == 2:
+ if l < 16:
+ address += b'\x00' * (16 - l)
+ address = dns.inet.inet_ntop(dns.inet.AF_INET6, address)
+ else:
+ #
+ # This isn't really right according to the RFC, but it
+ # seems better than throwing an exception
+ #
+ address = codecs.encode(address, 'hex_codec')
+ current += afdlen
+ rdlen -= afdlen
+ item = APLItem(header[0], negation, address, header[1])
+ items.append(item)
+ return cls(rdclass, rdtype, items)
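The least obvious part of `APLItem.to_wire` above is the RFC 3123 truncation rule: trailing zero octets of the address are dropped, and the high bit of the AFDLENGTH octet carries the negation flag. A standalone sketch of just the item encoding (IPv4 only; helper name is illustrative):

```python
import socket
import struct

def apl_item_to_wire(family, prefix, address, negation=False):
    # Truncate the address at its last non-zero octet, per RFC 3123.
    raw = socket.inet_aton(address)
    while raw and raw[-1] == 0:
        raw = raw[:-1]
    afdlen = len(raw)
    if negation:
        afdlen |= 0x80  # negation rides in the top bit of AFDLENGTH
    return struct.pack("!HBB", family, prefix, afdlen) + raw

# 10.0.0.0/8 keeps only the single non-zero octet 0x0a
assert apl_item_to_wire(1, 8, "10.0.0.0") == b"\x00\x01\x08\x01\x0a"
```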
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/DHCID.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/DHCID.py
new file mode 100644
index 0000000000..cec64590f0
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/DHCID.py
@@ -0,0 +1,61 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006, 2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import base64
+
+import dns.exception
+import dns.rdata
+
+
+class DHCID(dns.rdata.Rdata):
+
+ """DHCID record
+
+ @ivar data: the data (the content of the RR is opaque as far as the
+ DNS is concerned)
+ @type data: string
+ @see: RFC 4701"""
+
+ __slots__ = ['data']
+
+ def __init__(self, rdclass, rdtype, data):
+ super(DHCID, self).__init__(rdclass, rdtype)
+ self.data = data
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return dns.rdata._base64ify(self.data)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ data = base64.b64decode(b64)
+ return cls(rdclass, rdtype, data)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(self.data)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ data = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, data)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/IPSECKEY.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/IPSECKEY.py
new file mode 100644
index 0000000000..8f49ba137d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/IPSECKEY.py
@@ -0,0 +1,150 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006, 2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import base64
+
+import dns.exception
+import dns.inet
+import dns.name
+import dns.rdata
+
+
+class IPSECKEY(dns.rdata.Rdata):
+
+ """IPSECKEY record
+
+ @ivar precedence: the precedence for this key data
+ @type precedence: int
+ @ivar gateway_type: the gateway type
+ @type gateway_type: int
+ @ivar algorithm: the algorithm to use
+ @type algorithm: int
+    @ivar gateway: the gateway
+    @type gateway: None, IPv4 address, IPv6 address, or domain name
+ @ivar key: the public key
+ @type key: string
+ @see: RFC 4025"""
+
+ __slots__ = ['precedence', 'gateway_type', 'algorithm', 'gateway', 'key']
+
+ def __init__(self, rdclass, rdtype, precedence, gateway_type, algorithm,
+ gateway, key):
+ super(IPSECKEY, self).__init__(rdclass, rdtype)
+ if gateway_type == 0:
+ if gateway != '.' and gateway is not None:
+ raise SyntaxError('invalid gateway for gateway type 0')
+ gateway = None
+ elif gateway_type == 1:
+ # check that it's OK
+ dns.inet.inet_pton(dns.inet.AF_INET, gateway)
+ elif gateway_type == 2:
+ # check that it's OK
+ dns.inet.inet_pton(dns.inet.AF_INET6, gateway)
+ elif gateway_type == 3:
+ pass
+ else:
+ raise SyntaxError(
+ 'invalid IPSECKEY gateway type: %d' % gateway_type)
+ self.precedence = precedence
+ self.gateway_type = gateway_type
+ self.algorithm = algorithm
+ self.gateway = gateway
+ self.key = key
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ if self.gateway_type == 0:
+ gateway = '.'
+ elif self.gateway_type == 1:
+ gateway = self.gateway
+ elif self.gateway_type == 2:
+ gateway = self.gateway
+ elif self.gateway_type == 3:
+ gateway = str(self.gateway.choose_relativity(origin, relativize))
+ else:
+ raise ValueError('invalid gateway type')
+ return '%d %d %d %s %s' % (self.precedence, self.gateway_type,
+ self.algorithm, gateway,
+ dns.rdata._base64ify(self.key))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ precedence = tok.get_uint8()
+ gateway_type = tok.get_uint8()
+ algorithm = tok.get_uint8()
+ if gateway_type == 3:
+ gateway = tok.get_name().choose_relativity(origin, relativize)
+ else:
+ gateway = tok.get_string()
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ key = base64.b64decode(b64)
+ return cls(rdclass, rdtype, precedence, gateway_type, algorithm,
+ gateway, key)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack("!BBB", self.precedence, self.gateway_type,
+ self.algorithm)
+ file.write(header)
+ if self.gateway_type == 0:
+ pass
+ elif self.gateway_type == 1:
+ file.write(dns.inet.inet_pton(dns.inet.AF_INET, self.gateway))
+ elif self.gateway_type == 2:
+ file.write(dns.inet.inet_pton(dns.inet.AF_INET6, self.gateway))
+ elif self.gateway_type == 3:
+ self.gateway.to_wire(file, None, origin)
+ else:
+ raise ValueError('invalid gateway type')
+ file.write(self.key)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ if rdlen < 3:
+ raise dns.exception.FormError
+ header = struct.unpack('!BBB', wire[current: current + 3])
+ gateway_type = header[1]
+ current += 3
+ rdlen -= 3
+ if gateway_type == 0:
+ gateway = None
+ elif gateway_type == 1:
+ gateway = dns.inet.inet_ntop(dns.inet.AF_INET,
+ wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ elif gateway_type == 2:
+ gateway = dns.inet.inet_ntop(dns.inet.AF_INET6,
+ wire[current: current + 16])
+ current += 16
+ rdlen -= 16
+ elif gateway_type == 3:
+ (gateway, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ current += cused
+ rdlen -= cused
+ else:
+ raise dns.exception.FormError('invalid IPSECKEY gateway type')
+ key = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], gateway_type, header[2],
+ gateway, key)
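The gateway field above is the only variable-shape part of IPSECKEY: its encoding is selected by `gateway_type` (0 = absent, 1 = IPv4, 2 = IPv6, 3 = wire-format domain name). That dispatch can be sketched for the address cases with the standard library; type 3 is omitted because it needs a DNS name encoder like `dns.name` (helper name is illustrative):

```python
import socket

def gateway_to_wire(gateway_type, gateway):
    # Encode the IPSECKEY gateway field by type (RFC 4025).
    if gateway_type == 0:
        return b""  # no gateway present
    if gateway_type == 1:
        return socket.inet_pton(socket.AF_INET, gateway)
    if gateway_type == 2:
        return socket.inet_pton(socket.AF_INET6, gateway)
    # type 3 (domain name) needs DNS name wire encoding, not shown here
    raise ValueError("unsupported gateway type %d" % gateway_type)

assert gateway_to_wire(0, None) == b""
assert gateway_to_wire(1, "192.0.2.38") == b"\xc0\x00\x02\x26"
```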
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/KX.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/KX.py
new file mode 100644
index 0000000000..1318a582e7
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/KX.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.mxbase
+
+
+class KX(dns.rdtypes.mxbase.UncompressedMX):
+
+ """KX record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/NAPTR.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/NAPTR.py
new file mode 100644
index 0000000000..32fa4745ea
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/NAPTR.py
@@ -0,0 +1,127 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.name
+import dns.rdata
+from dns._compat import xrange, text_type
+
+
+def _write_string(file, s):
+ l = len(s)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(s)
+
+
+def _sanitize(value):
+ if isinstance(value, text_type):
+ return value.encode()
+ return value
+
+
+class NAPTR(dns.rdata.Rdata):
+
+ """NAPTR record
+
+ @ivar order: order
+ @type order: int
+ @ivar preference: preference
+ @type preference: int
+ @ivar flags: flags
+ @type flags: string
+ @ivar service: service
+ @type service: string
+ @ivar regexp: regular expression
+ @type regexp: string
+ @ivar replacement: replacement name
+ @type replacement: dns.name.Name object
+ @see: RFC 3403"""
+
+ __slots__ = ['order', 'preference', 'flags', 'service', 'regexp',
+ 'replacement']
+
+ def __init__(self, rdclass, rdtype, order, preference, flags, service,
+ regexp, replacement):
+ super(NAPTR, self).__init__(rdclass, rdtype)
+ self.flags = _sanitize(flags)
+ self.service = _sanitize(service)
+ self.regexp = _sanitize(regexp)
+ self.order = order
+ self.preference = preference
+ self.replacement = replacement
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ replacement = self.replacement.choose_relativity(origin, relativize)
+ return '%d %d "%s" "%s" "%s" %s' % \
+ (self.order, self.preference,
+ dns.rdata._escapify(self.flags),
+ dns.rdata._escapify(self.service),
+ dns.rdata._escapify(self.regexp),
+ replacement)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ order = tok.get_uint16()
+ preference = tok.get_uint16()
+ flags = tok.get_string()
+ service = tok.get_string()
+ regexp = tok.get_string()
+ replacement = tok.get_name()
+ replacement = replacement.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, order, preference, flags, service,
+ regexp, replacement)
+
+ def to_wire(self, file, compress=None, origin=None):
+ two_ints = struct.pack("!HH", self.order, self.preference)
+ file.write(two_ints)
+ _write_string(file, self.flags)
+ _write_string(file, self.service)
+ _write_string(file, self.regexp)
+ self.replacement.to_wire(file, compress, origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (order, preference) = struct.unpack('!HH', wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ strings = []
+ for i in xrange(3):
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen or rdlen < 0:
+ raise dns.exception.FormError
+ s = wire[current: current + l].unwrap()
+ current += l
+ rdlen -= l
+ strings.append(s)
+ (replacement, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ replacement = replacement.relativize(origin)
+ return cls(rdclass, rdtype, order, preference, strings[0], strings[1],
+ strings[2], replacement)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.replacement = self.replacement.choose_relativity(origin,
+ relativize)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP.py
new file mode 100644
index 0000000000..336befc7f2
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP.py
@@ -0,0 +1,60 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import binascii
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+
+
+class NSAP(dns.rdata.Rdata):
+
+ """NSAP record.
+
+ @ivar address: an NSAP address
+ @type address: string
+ @see: RFC 1706"""
+
+ __slots__ = ['address']
+
+ def __init__(self, rdclass, rdtype, address):
+ super(NSAP, self).__init__(rdclass, rdtype)
+ self.address = address
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return "0x%s" % binascii.hexlify(self.address).decode()
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_string()
+ tok.get_eol()
+ if address[0:2] != '0x':
+ raise dns.exception.SyntaxError('string does not start with 0x')
+ address = address[2:].replace('.', '')
+ if len(address) % 2 != 0:
+ raise dns.exception.SyntaxError('hexstring has odd length')
+ address = binascii.unhexlify(address.encode())
+ return cls(rdclass, rdtype, address)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(self.address)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ address = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, address)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP_PTR.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP_PTR.py
new file mode 100644
index 0000000000..a5b66c803f
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/NSAP_PTR.py
@@ -0,0 +1,23 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import dns.rdtypes.nsbase
+
+
+class NSAP_PTR(dns.rdtypes.nsbase.UncompressedNS):
+
+ """NSAP-PTR record"""
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/PX.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/PX.py
new file mode 100644
index 0000000000..2dbaee6ce8
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/PX.py
@@ -0,0 +1,89 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class PX(dns.rdata.Rdata):
+
+ """PX record.
+
+ @ivar preference: the preference value
+ @type preference: int
+ @ivar map822: the map822 name
+ @type map822: dns.name.Name object
+ @ivar mapx400: the mapx400 name
+ @type mapx400: dns.name.Name object
+ @see: RFC 2163"""
+
+ __slots__ = ['preference', 'map822', 'mapx400']
+
+ def __init__(self, rdclass, rdtype, preference, map822, mapx400):
+ super(PX, self).__init__(rdclass, rdtype)
+ self.preference = preference
+ self.map822 = map822
+ self.mapx400 = mapx400
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ map822 = self.map822.choose_relativity(origin, relativize)
+ mapx400 = self.mapx400.choose_relativity(origin, relativize)
+ return '%d %s %s' % (self.preference, map822, mapx400)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ preference = tok.get_uint16()
+ map822 = tok.get_name()
+ map822 = map822.choose_relativity(origin, relativize)
+ mapx400 = tok.get_name(None)
+ mapx400 = mapx400.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, preference, map822, mapx400)
+
+ def to_wire(self, file, compress=None, origin=None):
+ pref = struct.pack("!H", self.preference)
+ file.write(pref)
+ self.map822.to_wire(file, None, origin)
+ self.mapx400.to_wire(file, None, origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (preference, ) = struct.unpack('!H', wire[current: current + 2])
+ current += 2
+ rdlen -= 2
+ (map822, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused > rdlen:
+ raise dns.exception.FormError
+ current += cused
+ rdlen -= cused
+ if origin is not None:
+ map822 = map822.relativize(origin)
+ (mapx400, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ mapx400 = mapx400.relativize(origin)
+ return cls(rdclass, rdtype, preference, map822, mapx400)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.map822 = self.map822.choose_relativity(origin, relativize)
+ self.mapx400 = self.mapx400.choose_relativity(origin, relativize)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/SRV.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/SRV.py
new file mode 100644
index 0000000000..b2c1bc9f0b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/SRV.py
@@ -0,0 +1,83 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class SRV(dns.rdata.Rdata):
+
+ """SRV record
+
+ @ivar priority: the priority
+ @type priority: int
+ @ivar weight: the weight
+ @type weight: int
+ @ivar port: the port of the service
+ @type port: int
+ @ivar target: the target host
+ @type target: dns.name.Name object
+ @see: RFC 2782"""
+
+ __slots__ = ['priority', 'weight', 'port', 'target']
+
+ def __init__(self, rdclass, rdtype, priority, weight, port, target):
+ super(SRV, self).__init__(rdclass, rdtype)
+ self.priority = priority
+ self.weight = weight
+ self.port = port
+ self.target = target
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ target = self.target.choose_relativity(origin, relativize)
+ return '%d %d %d %s' % (self.priority, self.weight, self.port,
+ target)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ priority = tok.get_uint16()
+ weight = tok.get_uint16()
+ port = tok.get_uint16()
+ target = tok.get_name(None)
+ target = target.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, priority, weight, port, target)
+
+ def to_wire(self, file, compress=None, origin=None):
+ three_ints = struct.pack("!HHH", self.priority, self.weight, self.port)
+ file.write(three_ints)
+ self.target.to_wire(file, compress, origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (priority, weight, port) = struct.unpack('!HHH',
+ wire[current: current + 6])
+ current += 6
+ rdlen -= 6
+ (target, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ target = target.relativize(origin)
+ return cls(rdclass, rdtype, priority, weight, port, target)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.target = self.target.choose_relativity(origin, relativize)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/WKS.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/WKS.py
new file mode 100644
index 0000000000..96f98ada70
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/WKS.py
@@ -0,0 +1,107 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import socket
+import struct
+
+import dns.ipv4
+import dns.rdata
+from dns._compat import xrange
+
+_proto_tcp = socket.getprotobyname('tcp')
+_proto_udp = socket.getprotobyname('udp')
+
+
+class WKS(dns.rdata.Rdata):
+
+ """WKS record
+
+ @ivar address: the address
+ @type address: string
+ @ivar protocol: the protocol
+ @type protocol: int
+ @ivar bitmap: the bitmap
+ @type bitmap: string
+ @see: RFC 1035"""
+
+ __slots__ = ['address', 'protocol', 'bitmap']
+
+ def __init__(self, rdclass, rdtype, address, protocol, bitmap):
+ super(WKS, self).__init__(rdclass, rdtype)
+ self.address = address
+ self.protocol = protocol
+ if not isinstance(bitmap, bytearray):
+ self.bitmap = bytearray(bitmap)
+ else:
+ self.bitmap = bitmap
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ bits = []
+ for i in xrange(0, len(self.bitmap)):
+ byte = self.bitmap[i]
+ for j in xrange(0, 8):
+ if byte & (0x80 >> j):
+ bits.append(str(i * 8 + j))
+ text = ' '.join(bits)
+ return '%s %d %s' % (self.address, self.protocol, text)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ address = tok.get_string()
+ protocol = tok.get_string()
+ if protocol.isdigit():
+ protocol = int(protocol)
+ else:
+ protocol = socket.getprotobyname(protocol)
+ bitmap = bytearray()
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ if token.value.isdigit():
+ serv = int(token.value)
+ else:
+ if protocol != _proto_udp and protocol != _proto_tcp:
+ raise NotImplementedError("protocol must be TCP or UDP")
+ if protocol == _proto_udp:
+ protocol_text = "udp"
+ else:
+ protocol_text = "tcp"
+ serv = socket.getservbyname(token.value, protocol_text)
+ i = serv // 8
+ l = len(bitmap)
+ if l < i + 1:
+ for j in xrange(l, i + 1):
+ bitmap.append(0)
+ bitmap[i] = bitmap[i] | (0x80 >> (serv % 8))
+ bitmap = dns.rdata._truncate_bitmap(bitmap)
+ return cls(rdclass, rdtype, address, protocol, bitmap)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(dns.ipv4.inet_aton(self.address))
+ protocol = struct.pack('!B', self.protocol)
+ file.write(protocol)
+ file.write(self.bitmap)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ address = dns.ipv4.inet_ntoa(wire[current: current + 4])
+ protocol, = struct.unpack('!B', wire[current + 4: current + 5])
+ current += 5
+ rdlen -= 5
+ bitmap = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, address, protocol, bitmap)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/IN/__init__.py b/openpype/vendor/python/python_2/dns/rdtypes/IN/__init__.py
new file mode 100644
index 0000000000..d7e69c9f60
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/IN/__init__.py
@@ -0,0 +1,33 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Class IN rdata type classes."""
+
+__all__ = [
+ 'A',
+ 'AAAA',
+ 'APL',
+ 'DHCID',
+ 'IPSECKEY',
+ 'KX',
+ 'NAPTR',
+ 'NSAP',
+ 'NSAP_PTR',
+ 'PX',
+ 'SRV',
+ 'WKS',
+]
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/__init__.py b/openpype/vendor/python/python_2/dns/rdtypes/__init__.py
new file mode 100644
index 0000000000..1ac137f1fe
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/__init__.py
@@ -0,0 +1,27 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS rdata type classes"""
+
+__all__ = [
+ 'ANY',
+ 'IN',
+ 'CH',
+ 'euibase',
+ 'mxbase',
+ 'nsbase',
+]
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/dnskeybase.py b/openpype/vendor/python/python_2/dns/rdtypes/dnskeybase.py
new file mode 100644
index 0000000000..3e7e87ef15
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/dnskeybase.py
@@ -0,0 +1,138 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2004-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import base64
+import struct
+
+import dns.exception
+import dns.dnssec
+import dns.rdata
+
+# wildcard import
+__all__ = ["SEP", "REVOKE", "ZONE",
+ "flags_to_text_set", "flags_from_text_set"]
+
+# flag constants
+SEP = 0x0001
+REVOKE = 0x0080
+ZONE = 0x0100
+
+_flag_by_text = {
+ 'SEP': SEP,
+ 'REVOKE': REVOKE,
+ 'ZONE': ZONE
+}
+
+# We construct the inverse mapping programmatically to ensure that we
+# cannot make any mistakes (e.g. omissions, cut-and-paste errors) that
+# would cause the mapping not to be a true inverse.
+_flag_by_value = {y: x for x, y in _flag_by_text.items()}
+
+
+def flags_to_text_set(flags):
+ """Convert a DNSKEY flags value to set texts
+ @rtype: set([string])"""
+
+ flags_set = set()
+ mask = 0x1
+ while mask <= 0x8000:
+ if flags & mask:
+ text = _flag_by_value.get(mask)
+ if not text:
+ text = hex(mask)
+ flags_set.add(text)
+ mask <<= 1
+ return flags_set
+
+
+def flags_from_text_set(texts_set):
+ """Convert set of DNSKEY flag mnemonic texts to DNSKEY flag value
+ @rtype: int"""
+
+ flags = 0
+ for text in texts_set:
+ try:
+ flags += _flag_by_text[text]
+ except KeyError:
+ raise NotImplementedError(
+ "DNSKEY flag '%s' is not supported" % text)
+ return flags
+
+
+class DNSKEYBase(dns.rdata.Rdata):
+
+ """Base class for rdata that is like a DNSKEY record
+
+ @ivar flags: the key flags
+ @type flags: int
+ @ivar protocol: the protocol for which this key may be used
+ @type protocol: int
+ @ivar algorithm: the algorithm used for the key
+ @type algorithm: int
+ @ivar key: the public key
+ @type key: string"""
+
+ __slots__ = ['flags', 'protocol', 'algorithm', 'key']
+
+ def __init__(self, rdclass, rdtype, flags, protocol, algorithm, key):
+ super(DNSKEYBase, self).__init__(rdclass, rdtype)
+ self.flags = flags
+ self.protocol = protocol
+ self.algorithm = algorithm
+ self.key = key
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%d %d %d %s' % (self.flags, self.protocol, self.algorithm,
+ dns.rdata._base64ify(self.key))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ flags = tok.get_uint16()
+ protocol = tok.get_uint8()
+ algorithm = dns.dnssec.algorithm_from_text(tok.get_string())
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ b64 = b''.join(chunks)
+ key = base64.b64decode(b64)
+ return cls(rdclass, rdtype, flags, protocol, algorithm, key)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack("!HBB", self.flags, self.protocol, self.algorithm)
+ file.write(header)
+ file.write(self.key)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ if rdlen < 4:
+ raise dns.exception.FormError
+ header = struct.unpack('!HBB', wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ key = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], header[1], header[2],
+ key)
+
+ def flags_to_text_set(self):
+ """Convert a DNSKEY flags value to set texts
+ @rtype: set([string])"""
+ return flags_to_text_set(self.flags)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/dsbase.py b/openpype/vendor/python/python_2/dns/rdtypes/dsbase.py
new file mode 100644
index 0000000000..26ae9d5c7d
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/dsbase.py
@@ -0,0 +1,86 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2010, 2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import struct
+import binascii
+
+import dns.exception
+import dns.rdata
+import dns.rdatatype
+
+
+class DSBase(dns.rdata.Rdata):
+
+ """Base class for rdata that is like a DS record
+
+ @ivar key_tag: the key tag
+ @type key_tag: int
+ @ivar algorithm: the algorithm
+ @type algorithm: int
+ @ivar digest_type: the digest type
+ @type digest_type: int
+ @ivar digest: the digest
+ @type digest: string
+ @see: draft-ietf-dnsext-delegation-signer-14.txt"""
+
+ __slots__ = ['key_tag', 'algorithm', 'digest_type', 'digest']
+
+ def __init__(self, rdclass, rdtype, key_tag, algorithm, digest_type,
+ digest):
+ super(DSBase, self).__init__(rdclass, rdtype)
+ self.key_tag = key_tag
+ self.algorithm = algorithm
+ self.digest_type = digest_type
+ self.digest = digest
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return '%d %d %d %s' % (self.key_tag, self.algorithm,
+ self.digest_type,
+ dns.rdata._hexify(self.digest,
+ chunksize=128))
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ key_tag = tok.get_uint16()
+ algorithm = tok.get_uint8()
+ digest_type = tok.get_uint8()
+ chunks = []
+ while 1:
+ t = tok.get().unescape()
+ if t.is_eol_or_eof():
+ break
+ if not t.is_identifier():
+ raise dns.exception.SyntaxError
+ chunks.append(t.value.encode())
+ digest = b''.join(chunks)
+ digest = binascii.unhexlify(digest)
+ return cls(rdclass, rdtype, key_tag, algorithm, digest_type,
+ digest)
+
+ def to_wire(self, file, compress=None, origin=None):
+ header = struct.pack("!HBB", self.key_tag, self.algorithm,
+ self.digest_type)
+ file.write(header)
+ file.write(self.digest)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ header = struct.unpack("!HBB", wire[current: current + 4])
+ current += 4
+ rdlen -= 4
+ digest = wire[current: current + rdlen].unwrap()
+ return cls(rdclass, rdtype, header[0], header[1], header[2], digest)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/euibase.py b/openpype/vendor/python/python_2/dns/rdtypes/euibase.py
new file mode 100644
index 0000000000..cc5fdaa63b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/euibase.py
@@ -0,0 +1,72 @@
+# Copyright (C) 2015 Red Hat, Inc.
+# Author: Petr Spacek
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED 'AS IS' AND RED HAT DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+import binascii
+
+import dns.exception
+import dns.rdata
+from dns._compat import xrange
+
+
+class EUIBase(dns.rdata.Rdata):
+
+ """EUIxx record
+
+ @ivar eui: xx-bit Extended Unique Identifier (EUI-xx)
+ @type eui: string
+ @see: RFC 7043"""
+
+ __slots__ = ['eui']
+ # define these in subclasses
+ # byte_len = 6 # 0123456789ab (in hex)
+ # text_len = byte_len * 3 - 1 # 01-23-45-67-89-ab
+
+ def __init__(self, rdclass, rdtype, eui):
+ super(EUIBase, self).__init__(rdclass, rdtype)
+ if len(eui) != self.byte_len:
+ raise dns.exception.FormError('EUI%s rdata has to have %s bytes'
+ % (self.byte_len * 8, self.byte_len))
+ self.eui = eui
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ return dns.rdata._hexify(self.eui, chunksize=2).replace(' ', '-')
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ text = tok.get_string()
+ tok.get_eol()
+ if len(text) != cls.text_len:
+ raise dns.exception.SyntaxError(
+ 'Input text must have %s characters' % cls.text_len)
+ expected_dash_idxs = xrange(2, cls.byte_len * 3 - 1, 3)
+ for i in expected_dash_idxs:
+ if text[i] != '-':
+ raise dns.exception.SyntaxError('Dash expected at position %s'
+ % i)
+ text = text.replace('-', '')
+ try:
+ data = binascii.unhexlify(text.encode())
+ except (ValueError, TypeError) as ex:
+ raise dns.exception.SyntaxError('Hex decoding error: %s' % str(ex))
+ return cls(rdclass, rdtype, data)
+
+ def to_wire(self, file, compress=None, origin=None):
+ file.write(self.eui)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ eui = wire[current:current + rdlen].unwrap()
+ return cls(rdclass, rdtype, eui)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/mxbase.py b/openpype/vendor/python/python_2/dns/rdtypes/mxbase.py
new file mode 100644
index 0000000000..9a3fa62360
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/mxbase.py
@@ -0,0 +1,103 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""MX-like base classes."""
+
+from io import BytesIO
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class MXBase(dns.rdata.Rdata):
+
+ """Base class for rdata that is like an MX record.
+
+ @ivar preference: the preference value
+ @type preference: int
+ @ivar exchange: the exchange name
+ @type exchange: dns.name.Name object"""
+
+ __slots__ = ['preference', 'exchange']
+
+ def __init__(self, rdclass, rdtype, preference, exchange):
+ super(MXBase, self).__init__(rdclass, rdtype)
+ self.preference = preference
+ self.exchange = exchange
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ exchange = self.exchange.choose_relativity(origin, relativize)
+ return '%d %s' % (self.preference, exchange)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ preference = tok.get_uint16()
+ exchange = tok.get_name()
+ exchange = exchange.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, preference, exchange)
+
+ def to_wire(self, file, compress=None, origin=None):
+ pref = struct.pack("!H", self.preference)
+ file.write(pref)
+ self.exchange.to_wire(file, compress, origin)
+
+ def to_digestable(self, origin=None):
+ return struct.pack("!H", self.preference) + \
+ self.exchange.to_digestable(origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (preference, ) = struct.unpack('!H', wire[current: current + 2])
+ current += 2
+ rdlen -= 2
+ (exchange, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ exchange = exchange.relativize(origin)
+ return cls(rdclass, rdtype, preference, exchange)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.exchange = self.exchange.choose_relativity(origin, relativize)
+
+
+class UncompressedMX(MXBase):
+
+ """Base class for rdata that is like an MX record, but whose name
+ is not compressed when converted to DNS wire format, and whose
+ digestable form is not downcased."""
+
+ def to_wire(self, file, compress=None, origin=None):
+ super(UncompressedMX, self).to_wire(file, None, origin)
+
+ def to_digestable(self, origin=None):
+ f = BytesIO()
+ self.to_wire(f, None, origin)
+ return f.getvalue()
+
+
+class UncompressedDowncasingMX(MXBase):
+
+ """Base class for rdata that is like an MX record, but whose name
+    is not compressed when converted to DNS wire format."""
+
+ def to_wire(self, file, compress=None, origin=None):
+ super(UncompressedDowncasingMX, self).to_wire(file, None, origin)
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/nsbase.py b/openpype/vendor/python/python_2/dns/rdtypes/nsbase.py
new file mode 100644
index 0000000000..97a2232638
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/nsbase.py
@@ -0,0 +1,83 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""NS-like base classes."""
+
+from io import BytesIO
+
+import dns.exception
+import dns.rdata
+import dns.name
+
+
+class NSBase(dns.rdata.Rdata):
+
+ """Base class for rdata that is like an NS record.
+
+ @ivar target: the target name of the rdata
+ @type target: dns.name.Name object"""
+
+ __slots__ = ['target']
+
+ def __init__(self, rdclass, rdtype, target):
+ super(NSBase, self).__init__(rdclass, rdtype)
+ self.target = target
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ target = self.target.choose_relativity(origin, relativize)
+ return str(target)
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ target = tok.get_name()
+ target = target.choose_relativity(origin, relativize)
+ tok.get_eol()
+ return cls(rdclass, rdtype, target)
+
+ def to_wire(self, file, compress=None, origin=None):
+ self.target.to_wire(file, compress, origin)
+
+ def to_digestable(self, origin=None):
+ return self.target.to_digestable(origin)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ (target, cused) = dns.name.from_wire(wire[: current + rdlen],
+ current)
+ if cused != rdlen:
+ raise dns.exception.FormError
+ if origin is not None:
+ target = target.relativize(origin)
+ return cls(rdclass, rdtype, target)
+
+ def choose_relativity(self, origin=None, relativize=True):
+ self.target = self.target.choose_relativity(origin, relativize)
+
+
+class UncompressedNS(NSBase):
+
+ """Base class for rdata that is like an NS record, but whose name
+    is not compressed when converted to DNS wire format, and whose
+ digestable form is not downcased."""
+
+ def to_wire(self, file, compress=None, origin=None):
+ super(UncompressedNS, self).to_wire(file, None, origin)
+
+ def to_digestable(self, origin=None):
+ f = BytesIO()
+ self.to_wire(f, None, origin)
+ return f.getvalue()
diff --git a/openpype/vendor/python/python_2/dns/rdtypes/txtbase.py b/openpype/vendor/python/python_2/dns/rdtypes/txtbase.py
new file mode 100644
index 0000000000..645a57ecfc
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rdtypes/txtbase.py
@@ -0,0 +1,97 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""TXT-like base class."""
+
+import struct
+
+import dns.exception
+import dns.rdata
+import dns.tokenizer
+from dns._compat import binary_type, string_types
+
+
+class TXTBase(dns.rdata.Rdata):
+
+ """Base class for rdata that is like a TXT record
+
+ @ivar strings: the strings
+ @type strings: list of binary
+ @see: RFC 1035"""
+
+ __slots__ = ['strings']
+
+ def __init__(self, rdclass, rdtype, strings):
+ super(TXTBase, self).__init__(rdclass, rdtype)
+ if isinstance(strings, binary_type) or \
+ isinstance(strings, string_types):
+ strings = [strings]
+ self.strings = []
+ for string in strings:
+ if isinstance(string, string_types):
+ string = string.encode()
+ self.strings.append(string)
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ txt = ''
+ prefix = ''
+ for s in self.strings:
+ txt += '{}"{}"'.format(prefix, dns.rdata._escapify(s))
+ prefix = ' '
+ return txt
+
+ @classmethod
+ def from_text(cls, rdclass, rdtype, tok, origin=None, relativize=True):
+ strings = []
+ while 1:
+ token = tok.get().unescape()
+ if token.is_eol_or_eof():
+ break
+ if not (token.is_quoted_string() or token.is_identifier()):
+ raise dns.exception.SyntaxError("expected a string")
+ if len(token.value) > 255:
+ raise dns.exception.SyntaxError("string too long")
+ value = token.value
+ if isinstance(value, binary_type):
+ strings.append(value)
+ else:
+ strings.append(value.encode())
+ if len(strings) == 0:
+ raise dns.exception.UnexpectedEnd
+ return cls(rdclass, rdtype, strings)
+
+ def to_wire(self, file, compress=None, origin=None):
+ for s in self.strings:
+ l = len(s)
+ assert l < 256
+ file.write(struct.pack('!B', l))
+ file.write(s)
+
+ @classmethod
+ def from_wire(cls, rdclass, rdtype, wire, current, rdlen, origin=None):
+ strings = []
+ while rdlen > 0:
+ l = wire[current]
+ current += 1
+ rdlen -= 1
+ if l > rdlen:
+ raise dns.exception.FormError
+ s = wire[current: current + l].unwrap()
+ current += l
+ rdlen -= l
+ strings.append(s)
+ return cls(rdclass, rdtype, strings)
diff --git a/openpype/vendor/python/python_2/dns/renderer.py b/openpype/vendor/python/python_2/dns/renderer.py
new file mode 100644
index 0000000000..d7ef8c7f09
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/renderer.py
@@ -0,0 +1,291 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Help for building DNS wire format messages"""
+
+from io import BytesIO
+import struct
+import random
+import time
+
+import dns.exception
+import dns.rdataclass
+import dns.rdatatype
+import dns.tsig
+from ._compat import long
+
+
+QUESTION = 0
+ANSWER = 1
+AUTHORITY = 2
+ADDITIONAL = 3
+
+
+class Renderer(object):
+ """Helper class for building DNS wire-format messages.
+
+ Most applications can use the higher-level L{dns.message.Message}
+ class and its to_wire() method to generate wire-format messages.
+ This class is for those applications which need finer control
+ over the generation of messages.
+
+ Typical use::
+
+ r = dns.renderer.Renderer(id=1, flags=0x80, max_size=512)
+ r.add_question(qname, qtype, qclass)
+ r.add_rrset(dns.renderer.ANSWER, rrset_1)
+ r.add_rrset(dns.renderer.ANSWER, rrset_2)
+ r.add_rrset(dns.renderer.AUTHORITY, ns_rrset)
+ r.add_edns(0, 0, 4096)
+        r.add_rrset(dns.renderer.ADDITIONAL, ad_rrset_1)
+        r.add_rrset(dns.renderer.ADDITIONAL, ad_rrset_2)
+ r.write_header()
+ r.add_tsig(keyname, secret, 300, 1, 0, '', request_mac)
+ wire = r.get_wire()
+
+    output: a BytesIO where rendering is written
+
+ id: the message id
+
+ flags: the message flags
+
+ max_size: the maximum size of the message
+
+ origin: the origin to use when rendering relative names
+
+ compress: the compression table
+
+ section: an int, the section currently being rendered
+
+ counts: list of the number of RRs in each section
+
+ mac: the MAC of the rendered message (if TSIG was used)
+ """
+
+ def __init__(self, id=None, flags=0, max_size=65535, origin=None):
+ """Initialize a new renderer."""
+
+ self.output = BytesIO()
+ if id is None:
+ self.id = random.randint(0, 65535)
+ else:
+ self.id = id
+ self.flags = flags
+ self.max_size = max_size
+ self.origin = origin
+ self.compress = {}
+ self.section = QUESTION
+ self.counts = [0, 0, 0, 0]
+ self.output.write(b'\x00' * 12)
+ self.mac = ''
+
+ def _rollback(self, where):
+ """Truncate the output buffer at offset *where*, and remove any
+ compression table entries that pointed beyond the truncation
+ point.
+ """
+
+ self.output.seek(where)
+ self.output.truncate()
+ keys_to_delete = []
+ for k, v in self.compress.items():
+ if v >= where:
+ keys_to_delete.append(k)
+ for k in keys_to_delete:
+ del self.compress[k]
+
+ def _set_section(self, section):
+ """Set the renderer's current section.
+
+        Sections must be rendered in order: QUESTION, ANSWER, AUTHORITY,
+ ADDITIONAL. Sections may be empty.
+
+ Raises dns.exception.FormError if an attempt was made to set
+ a section value less than the current section.
+ """
+
+ if self.section != section:
+ if self.section > section:
+ raise dns.exception.FormError
+ self.section = section
+
+ def add_question(self, qname, rdtype, rdclass=dns.rdataclass.IN):
+ """Add a question to the message."""
+
+ self._set_section(QUESTION)
+ before = self.output.tell()
+ qname.to_wire(self.output, self.compress, self.origin)
+ self.output.write(struct.pack("!HH", rdtype, rdclass))
+ after = self.output.tell()
+ if after >= self.max_size:
+ self._rollback(before)
+ raise dns.exception.TooBig
+ self.counts[QUESTION] += 1
+
+ def add_rrset(self, section, rrset, **kw):
+ """Add the rrset to the specified section.
+
+ Any keyword arguments are passed on to the rdataset's to_wire()
+ routine.
+ """
+
+ self._set_section(section)
+ before = self.output.tell()
+ n = rrset.to_wire(self.output, self.compress, self.origin, **kw)
+ after = self.output.tell()
+ if after >= self.max_size:
+ self._rollback(before)
+ raise dns.exception.TooBig
+ self.counts[section] += n
+
+ def add_rdataset(self, section, name, rdataset, **kw):
+ """Add the rdataset to the specified section, using the specified
+ name as the owner name.
+
+ Any keyword arguments are passed on to the rdataset's to_wire()
+ routine.
+ """
+
+ self._set_section(section)
+ before = self.output.tell()
+ n = rdataset.to_wire(name, self.output, self.compress, self.origin,
+ **kw)
+ after = self.output.tell()
+ if after >= self.max_size:
+ self._rollback(before)
+ raise dns.exception.TooBig
+ self.counts[section] += n
+
+ def add_edns(self, edns, ednsflags, payload, options=None):
+ """Add an EDNS OPT record to the message."""
+
+ # make sure the EDNS version in ednsflags agrees with edns
+ ednsflags &= long(0xFF00FFFF)
+ ednsflags |= (edns << 16)
+ self._set_section(ADDITIONAL)
+ before = self.output.tell()
+ self.output.write(struct.pack('!BHHIH', 0, dns.rdatatype.OPT, payload,
+ ednsflags, 0))
+ if options is not None:
+ lstart = self.output.tell()
+ for opt in options:
+ stuff = struct.pack("!HH", opt.otype, 0)
+ self.output.write(stuff)
+ start = self.output.tell()
+ opt.to_wire(self.output)
+ end = self.output.tell()
+ assert end - start < 65536
+ self.output.seek(start - 2)
+ stuff = struct.pack("!H", end - start)
+ self.output.write(stuff)
+ self.output.seek(0, 2)
+ lend = self.output.tell()
+ assert lend - lstart < 65536
+ self.output.seek(lstart - 2)
+ stuff = struct.pack("!H", lend - lstart)
+ self.output.write(stuff)
+ self.output.seek(0, 2)
+ after = self.output.tell()
+ if after >= self.max_size:
+ self._rollback(before)
+ raise dns.exception.TooBig
+ self.counts[ADDITIONAL] += 1
+
+ def add_tsig(self, keyname, secret, fudge, id, tsig_error, other_data,
+ request_mac, algorithm=dns.tsig.default_algorithm):
+ """Add a TSIG signature to the message."""
+
+ s = self.output.getvalue()
+ (tsig_rdata, self.mac, ctx) = dns.tsig.sign(s,
+ keyname,
+ secret,
+ int(time.time()),
+ fudge,
+ id,
+ tsig_error,
+ other_data,
+ request_mac,
+ algorithm=algorithm)
+ self._write_tsig(tsig_rdata, keyname)
+
+ def add_multi_tsig(self, ctx, keyname, secret, fudge, id, tsig_error,
+ other_data, request_mac,
+ algorithm=dns.tsig.default_algorithm):
+ """Add a TSIG signature to the message. Unlike add_tsig(), this can be
+ used for a series of consecutive DNS envelopes, e.g. for a zone
+ transfer over TCP [RFC2845, 4.4].
+
+ For the first message in the sequence, give ctx=None. For each
+ subsequent message, give the ctx that was returned from the
+ add_multi_tsig() call for the previous message."""
+
+ s = self.output.getvalue()
+ (tsig_rdata, self.mac, ctx) = dns.tsig.sign(s,
+ keyname,
+ secret,
+ int(time.time()),
+ fudge,
+ id,
+ tsig_error,
+ other_data,
+ request_mac,
+ ctx=ctx,
+ first=ctx is None,
+ multi=True,
+ algorithm=algorithm)
+ self._write_tsig(tsig_rdata, keyname)
+ return ctx
+
+ def _write_tsig(self, tsig_rdata, keyname):
+ self._set_section(ADDITIONAL)
+ before = self.output.tell()
+
+ keyname.to_wire(self.output, self.compress, self.origin)
+ self.output.write(struct.pack('!HHIH', dns.rdatatype.TSIG,
+ dns.rdataclass.ANY, 0, 0))
+ rdata_start = self.output.tell()
+ self.output.write(tsig_rdata)
+
+ after = self.output.tell()
+ assert after - rdata_start < 65536
+ if after >= self.max_size:
+ self._rollback(before)
+ raise dns.exception.TooBig
+
+ self.output.seek(rdata_start - 2)
+ self.output.write(struct.pack('!H', after - rdata_start))
+ self.counts[ADDITIONAL] += 1
+ self.output.seek(10)
+ self.output.write(struct.pack('!H', self.counts[ADDITIONAL]))
+ self.output.seek(0, 2)
+
+ def write_header(self):
+ """Write the DNS message header.
+
+ Writing the DNS message header is done after all sections
+ have been rendered, but before the optional TSIG signature
+ is added.
+ """
+
+ self.output.seek(0)
+ self.output.write(struct.pack('!HHHHHH', self.id, self.flags,
+ self.counts[0], self.counts[1],
+ self.counts[2], self.counts[3]))
+ self.output.seek(0, 2)
+
+ def get_wire(self):
+ """Return the wire format message."""
+
+ return self.output.getvalue()
diff --git a/openpype/vendor/python/python_2/dns/resolver.py b/openpype/vendor/python/python_2/dns/resolver.py
new file mode 100644
index 0000000000..806e5b2b45
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/resolver.py
@@ -0,0 +1,1383 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS stub resolver."""
+
+import socket
+import sys
+import time
+import random
+
+try:
+ import threading as _threading
+except ImportError:
+ import dummy_threading as _threading
+
+import dns.exception
+import dns.flags
+import dns.ipv4
+import dns.ipv6
+import dns.message
+import dns.name
+import dns.query
+import dns.rcode
+import dns.rdataclass
+import dns.rdatatype
+import dns.reversename
+import dns.tsig
+from ._compat import xrange, string_types
+
+if sys.platform == 'win32':
+ try:
+ import winreg as _winreg
+ except ImportError:
+ import _winreg # pylint: disable=import-error
+
+class NXDOMAIN(dns.exception.DNSException):
+ """The DNS query name does not exist."""
+ supp_kwargs = {'qnames', 'responses'}
+ fmt = None # we have our own __str__ implementation
+
+ def _check_kwargs(self, qnames, responses=None):
+ if not isinstance(qnames, (list, tuple, set)):
+ raise AttributeError("qnames must be a list, tuple or set")
+ if len(qnames) == 0:
+ raise AttributeError("qnames must contain at least one element")
+ if responses is None:
+ responses = {}
+ elif not isinstance(responses, dict):
+ raise AttributeError("responses must be a dict(qname=response)")
+ kwargs = dict(qnames=qnames, responses=responses)
+ return kwargs
+
+ def __str__(self):
+ if 'qnames' not in self.kwargs:
+ return super(NXDOMAIN, self).__str__()
+ qnames = self.kwargs['qnames']
+ if len(qnames) > 1:
+            msg = 'None of the DNS query names exist'
+ else:
+ msg = 'The DNS query name does not exist'
+ qnames = ', '.join(map(str, qnames))
+ return "{}: {}".format(msg, qnames)
+
+ def canonical_name(self):
+        if 'qnames' not in self.kwargs:
+ raise TypeError("parametrized exception required")
+ IN = dns.rdataclass.IN
+ CNAME = dns.rdatatype.CNAME
+ cname = None
+ for qname in self.kwargs['qnames']:
+ response = self.kwargs['responses'][qname]
+ for answer in response.answer:
+ if answer.rdtype != CNAME or answer.rdclass != IN:
+ continue
+ cname = answer.items[0].target.to_text()
+ if cname is not None:
+ return dns.name.from_text(cname)
+ return self.kwargs['qnames'][0]
+ canonical_name = property(canonical_name, doc=(
+ "Return the unresolved canonical name."))
+
+ def __add__(self, e_nx):
+ """Augment by results from another NXDOMAIN exception."""
+ qnames0 = list(self.kwargs.get('qnames', []))
+ responses0 = dict(self.kwargs.get('responses', {}))
+ responses1 = e_nx.kwargs.get('responses', {})
+ for qname1 in e_nx.kwargs.get('qnames', []):
+ if qname1 not in qnames0:
+ qnames0.append(qname1)
+ if qname1 in responses1:
+ responses0[qname1] = responses1[qname1]
+ return NXDOMAIN(qnames=qnames0, responses=responses0)
+
+ def qnames(self):
+ """All of the names that were tried.
+
+ Returns a list of ``dns.name.Name``.
+ """
+ return self.kwargs['qnames']
+
+ def responses(self):
+ """A map from queried names to their NXDOMAIN responses.
+
+ Returns a dict mapping a ``dns.name.Name`` to a
+ ``dns.message.Message``.
+ """
+ return self.kwargs['responses']
+
+ def response(self, qname):
+ """The response for query *qname*.
+
+ Returns a ``dns.message.Message``.
+ """
+ return self.kwargs['responses'][qname]
+
+
+class YXDOMAIN(dns.exception.DNSException):
+ """The DNS query name is too long after DNAME substitution."""
+
+# The definition of the Timeout exception has moved from here to the
+# dns.exception module. We keep dns.resolver.Timeout defined for
+# backwards compatibility.
+
+Timeout = dns.exception.Timeout
+
+
+class NoAnswer(dns.exception.DNSException):
+ """The DNS response does not contain an answer to the question."""
+ fmt = 'The DNS response does not contain an answer ' + \
+ 'to the question: {query}'
+ supp_kwargs = {'response'}
+
+ def _fmt_kwargs(self, **kwargs):
+ return super(NoAnswer, self)._fmt_kwargs(
+ query=kwargs['response'].question)
+
+
+class NoNameservers(dns.exception.DNSException):
+ """All nameservers failed to answer the query.
+
+ errors: list of servers and respective errors
+ The type of errors is
+ [(server IP address, any object convertible to string)].
+        A non-empty errors list will add an explanatory message.
+ """
+
+ msg = "All nameservers failed to answer the query."
+ fmt = "%s {query}: {errors}" % msg[:-1]
+ supp_kwargs = {'request', 'errors'}
+
+ def _fmt_kwargs(self, **kwargs):
+ srv_msgs = []
+ for err in kwargs['errors']:
+ srv_msgs.append('Server {} {} port {} answered {}'.format(err[0],
+ 'TCP' if err[1] else 'UDP', err[2], err[3]))
+ return super(NoNameservers, self)._fmt_kwargs(
+ query=kwargs['request'].question, errors='; '.join(srv_msgs))
+
+
+class NotAbsolute(dns.exception.DNSException):
+ """An absolute domain name is required but a relative name was provided."""
+
+
+class NoRootSOA(dns.exception.DNSException):
+ """There is no SOA RR at the DNS root name. This should never happen!"""
+
+
+class NoMetaqueries(dns.exception.DNSException):
+ """DNS metaqueries are not allowed."""
+
+
+class Answer(object):
+ """DNS stub resolver answer.
+
+ Instances of this class bundle up the result of a successful DNS
+ resolution.
+
+ For convenience, the answer object implements much of the sequence
+ protocol, forwarding to its ``rrset`` attribute. E.g.
+ ``for a in answer`` is equivalent to ``for a in answer.rrset``.
+ ``answer[i]`` is equivalent to ``answer.rrset[i]``, and
+ ``answer[i:j]`` is equivalent to ``answer.rrset[i:j]``.
+
+    Note that CNAMEs or DNAMEs in the response may mean that the answer
+    RRset's name is not the query name.
+ """
+
+ def __init__(self, qname, rdtype, rdclass, response,
+ raise_on_no_answer=True):
+ self.qname = qname
+ self.rdtype = rdtype
+ self.rdclass = rdclass
+ self.response = response
+ min_ttl = -1
+ rrset = None
+ for count in xrange(0, 15):
+ try:
+ rrset = response.find_rrset(response.answer, qname,
+ rdclass, rdtype)
+ if min_ttl == -1 or rrset.ttl < min_ttl:
+ min_ttl = rrset.ttl
+ break
+ except KeyError:
+ if rdtype != dns.rdatatype.CNAME:
+ try:
+ crrset = response.find_rrset(response.answer,
+ qname,
+ rdclass,
+ dns.rdatatype.CNAME)
+ if min_ttl == -1 or crrset.ttl < min_ttl:
+ min_ttl = crrset.ttl
+ for rd in crrset:
+ qname = rd.target
+ break
+ continue
+ except KeyError:
+ if raise_on_no_answer:
+ raise NoAnswer(response=response)
+ if raise_on_no_answer:
+ raise NoAnswer(response=response)
+ if rrset is None and raise_on_no_answer:
+ raise NoAnswer(response=response)
+ self.canonical_name = qname
+ self.rrset = rrset
+ if rrset is None:
+ while 1:
+ # Look for a SOA RR whose owner name is a superdomain
+ # of qname.
+ try:
+ srrset = response.find_rrset(response.authority, qname,
+ rdclass, dns.rdatatype.SOA)
+ if min_ttl == -1 or srrset.ttl < min_ttl:
+ min_ttl = srrset.ttl
+ if srrset[0].minimum < min_ttl:
+ min_ttl = srrset[0].minimum
+ break
+ except KeyError:
+ try:
+ qname = qname.parent()
+ except dns.name.NoParent:
+ break
+ self.expiration = time.time() + min_ttl
+
+ def __getattr__(self, attr):
+ if attr == 'name':
+ return self.rrset.name
+ elif attr == 'ttl':
+ return self.rrset.ttl
+ elif attr == 'covers':
+ return self.rrset.covers
+ elif attr == 'rdclass':
+ return self.rrset.rdclass
+ elif attr == 'rdtype':
+ return self.rrset.rdtype
+ else:
+ raise AttributeError(attr)
+
+ def __len__(self):
+ return self.rrset and len(self.rrset) or 0
+
+ def __iter__(self):
+ return self.rrset and iter(self.rrset) or iter(tuple())
+
+ def __getitem__(self, i):
+ if self.rrset is None:
+ raise IndexError
+ return self.rrset[i]
+
+ def __delitem__(self, i):
+ if self.rrset is None:
+ raise IndexError
+ del self.rrset[i]
+
+
+class Cache(object):
+ """Simple thread-safe DNS answer cache."""
+
+ def __init__(self, cleaning_interval=300.0):
+ """*cleaning_interval*, a ``float`` is the number of seconds between
+ periodic cleanings.
+ """
+
+ self.data = {}
+ self.cleaning_interval = cleaning_interval
+ self.next_cleaning = time.time() + self.cleaning_interval
+ self.lock = _threading.Lock()
+
+ def _maybe_clean(self):
+ """Clean the cache if it's time to do so."""
+
+ now = time.time()
+ if self.next_cleaning <= now:
+ keys_to_delete = []
+ for (k, v) in self.data.items():
+ if v.expiration <= now:
+ keys_to_delete.append(k)
+ for k in keys_to_delete:
+ del self.data[k]
+ now = time.time()
+ self.next_cleaning = now + self.cleaning_interval
+
+ def get(self, key):
+ """Get the answer associated with *key*.
+
+ Returns None if no answer is cached for the key.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+
+ Returns a ``dns.resolver.Answer`` or ``None``.
+ """
+
+ try:
+ self.lock.acquire()
+ self._maybe_clean()
+ v = self.data.get(key)
+ if v is None or v.expiration <= time.time():
+ return None
+ return v
+ finally:
+ self.lock.release()
+
+ def put(self, key, value):
+ """Associate key and value in the cache.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+
+ *value*, a ``dns.resolver.Answer``, the answer.
+ """
+
+ try:
+ self.lock.acquire()
+ self._maybe_clean()
+ self.data[key] = value
+ finally:
+ self.lock.release()
+
+ def flush(self, key=None):
+ """Flush the cache.
+
+ If *key* is not ``None``, only that item is flushed. Otherwise
+ the entire cache is flushed.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+ """
+
+ try:
+ self.lock.acquire()
+ if key is not None:
+ if key in self.data:
+ del self.data[key]
+ else:
+ self.data = {}
+ self.next_cleaning = time.time() + self.cleaning_interval
+ finally:
+ self.lock.release()
+
+
+class LRUCacheNode(object):
+ """LRUCache node."""
+
+ def __init__(self, key, value):
+ self.key = key
+ self.value = value
+ self.prev = self
+ self.next = self
+
+ def link_before(self, node):
+ self.prev = node.prev
+ self.next = node
+ node.prev.next = self
+ node.prev = self
+
+ def link_after(self, node):
+ self.prev = node
+ self.next = node.next
+ node.next.prev = self
+ node.next = self
+
+ def unlink(self):
+ self.next.prev = self.prev
+ self.prev.next = self.next
+
+
+class LRUCache(object):
+ """Thread-safe, bounded, least-recently-used DNS answer cache.
+
+ This cache is better than the simple cache (above) if you're
+ running a web crawler or other process that does a lot of
+ resolutions. The LRUCache has a maximum number of nodes, and when
+ it is full, the least-recently used node is removed to make space
+ for a new one.
+ """
+
+ def __init__(self, max_size=100000):
+ """*max_size*, an ``int``, is the maximum number of nodes to cache;
+ it must be greater than 0.
+ """
+
+ self.data = {}
+ self.set_max_size(max_size)
+ self.sentinel = LRUCacheNode(None, None)
+ self.lock = _threading.Lock()
+
+ def set_max_size(self, max_size):
+ if max_size < 1:
+ max_size = 1
+ self.max_size = max_size
+
+ def get(self, key):
+ """Get the answer associated with *key*.
+
+ Returns None if no answer is cached for the key.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+
+ Returns a ``dns.resolver.Answer`` or ``None``.
+ """
+
+ try:
+ self.lock.acquire()
+ node = self.data.get(key)
+ if node is None:
+ return None
+ # Unlink because we're either going to move the node to the front
+ # of the LRU list or we're going to free it.
+ node.unlink()
+ if node.value.expiration <= time.time():
+ del self.data[node.key]
+ return None
+ node.link_after(self.sentinel)
+ return node.value
+ finally:
+ self.lock.release()
+
+ def put(self, key, value):
+ """Associate key and value in the cache.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+
+ *value*, a ``dns.resolver.Answer``, the answer.
+ """
+
+ try:
+ self.lock.acquire()
+ node = self.data.get(key)
+ if node is not None:
+ node.unlink()
+ del self.data[node.key]
+ while len(self.data) >= self.max_size:
+ node = self.sentinel.prev
+ node.unlink()
+ del self.data[node.key]
+ node = LRUCacheNode(key, value)
+ node.link_after(self.sentinel)
+ self.data[key] = node
+ finally:
+ self.lock.release()
+
+ def flush(self, key=None):
+ """Flush the cache.
+
+ If *key* is not ``None``, only that item is flushed. Otherwise
+ the entire cache is flushed.
+
+ *key*, a ``(dns.name.Name, int, int)`` tuple whose values are the
+ query name, rdtype, and rdclass respectively.
+ """
+
+ try:
+ self.lock.acquire()
+ if key is not None:
+ node = self.data.get(key)
+ if node is not None:
+ node.unlink()
+ del self.data[node.key]
+ else:
+ node = self.sentinel.next
+ while node != self.sentinel:
+ next = node.next
+ node.prev = None
+ node.next = None
+ node = next
+ self.data = {}
+ finally:
+ self.lock.release()
+
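The cache above gets O(1) promotion and eviction by threading every node onto a circular doubly linked list headed by a self-linked sentinel: the most recently used node sits just after the sentinel, the least recently used just before it. A minimal stdlib-only sketch of the same technique (hypothetical `TinyLRU`, not part of dnspython, and without the locking):

```python
class _Node(object):
    """Doubly linked list node; a self-linked node serves as the sentinel."""
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = self

    def unlink(self):
        self.prev.next = self.next
        self.next.prev = self.prev

    def link_after(self, node):
        self.prev, self.next = node, node.next
        node.next.prev = self
        node.next = self


class TinyLRU(object):
    """Minimal LRU cache: MRU right after the sentinel, LRU right before it."""
    def __init__(self, max_size=2):
        self.max_size = max(1, max_size)
        self.data = {}
        self.sentinel = _Node()

    def get(self, key):
        node = self.data.get(key)
        if node is None:
            return None
        node.unlink()
        node.link_after(self.sentinel)   # promote to the front on access
        return node.value

    def put(self, key, value):
        old = self.data.get(key)
        if old is not None:
            old.unlink()
            del self.data[old.key]
        while len(self.data) >= self.max_size:
            lru = self.sentinel.prev     # least recently used is at the tail
            lru.unlink()
            del self.data[lru.key]
        node = _Node(key, value)
        node.link_after(self.sentinel)
        self.data[key] = node
```

With `max_size=2`, inserting `a`, `b`, touching `a`, then inserting `c` evicts `b`, not `a` — the access reordered the list.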
+
+class Resolver(object):
+ """DNS stub resolver."""
+
+ def __init__(self, filename='/etc/resolv.conf', configure=True):
+ """*filename*, a ``text`` or file object, specifying a file
+ in standard /etc/resolv.conf format. This parameter is meaningful
+ only when *configure* is true and the platform is POSIX.
+
+ *configure*, a ``bool``. If ``True`` (the default), the resolver
+ instance is configured in the normal fashion for the operating
+ system the resolver is running on, i.e. by reading a
+ /etc/resolv.conf file on POSIX systems and the registry
+ on Windows systems.
+ """
+
+ self.domain = None
+ self.nameservers = None
+ self.nameserver_ports = None
+ self.port = None
+ self.search = None
+ self.timeout = None
+ self.lifetime = None
+ self.keyring = None
+ self.keyname = None
+ self.keyalgorithm = None
+ self.edns = None
+ self.ednsflags = None
+ self.payload = None
+ self.cache = None
+ self.flags = None
+ self.retry_servfail = False
+ self.rotate = False
+
+ self.reset()
+ if configure:
+ if sys.platform == 'win32':
+ self.read_registry()
+ elif filename:
+ self.read_resolv_conf(filename)
+
+ def reset(self):
+ """Reset all resolver configuration to the defaults."""
+
+ self.domain = \
+ dns.name.Name(dns.name.from_text(socket.gethostname())[1:])
+ if len(self.domain) == 0:
+ self.domain = dns.name.root
+ self.nameservers = []
+ self.nameserver_ports = {}
+ self.port = 53
+ self.search = []
+ self.timeout = 2.0
+ self.lifetime = 30.0
+ self.keyring = None
+ self.keyname = None
+ self.keyalgorithm = dns.tsig.default_algorithm
+ self.edns = -1
+ self.ednsflags = 0
+ self.payload = 0
+ self.cache = None
+ self.flags = None
+ self.retry_servfail = False
+ self.rotate = False
+
+ def read_resolv_conf(self, f):
+ """Process *f* as a file in the /etc/resolv.conf format. If f is
+ a ``text``, it is used as the name of the file to open; otherwise it
+ is treated as the file itself."""
+
+ if isinstance(f, string_types):
+ try:
+ f = open(f, 'r')
+ except IOError:
+ # /etc/resolv.conf doesn't exist, can't be read, etc.
+ # We'll just use the default resolver configuration.
+ self.nameservers = ['127.0.0.1']
+ return
+ want_close = True
+ else:
+ want_close = False
+ try:
+ for l in f:
+ if len(l) == 0 or l[0] == '#' or l[0] == ';':
+ continue
+ tokens = l.split()
+
+ # Any line containing less than 2 tokens is malformed
+ if len(tokens) < 2:
+ continue
+
+ if tokens[0] == 'nameserver':
+ self.nameservers.append(tokens[1])
+ elif tokens[0] == 'domain':
+ self.domain = dns.name.from_text(tokens[1])
+ elif tokens[0] == 'search':
+ for suffix in tokens[1:]:
+ self.search.append(dns.name.from_text(suffix))
+ elif tokens[0] == 'options':
+ if 'rotate' in tokens[1:]:
+ self.rotate = True
+ finally:
+ if want_close:
+ f.close()
+ if len(self.nameservers) == 0:
+ self.nameservers.append('127.0.0.1')
+
+ def _determine_split_char(self, entry):
+ #
+ # The windows registry irritatingly changes the list element
+ # delimiter in between ' ' and ',' (and vice-versa) in various
+ # versions of windows.
+ #
+ if entry.find(' ') >= 0:
+ split_char = ' '
+ elif entry.find(',') >= 0:
+ split_char = ','
+ else:
+ # probably a singleton; treat as a space-separated list.
+ split_char = ' '
+ return split_char
+
+ def _config_win32_nameservers(self, nameservers):
+ # we call str() on nameservers to convert it from unicode to ascii
+ nameservers = str(nameservers)
+ split_char = self._determine_split_char(nameservers)
+ ns_list = nameservers.split(split_char)
+ for ns in ns_list:
+ if ns not in self.nameservers:
+ self.nameservers.append(ns)
+
+ def _config_win32_domain(self, domain):
+ # we call str() on domain to convert it from unicode to ascii
+ self.domain = dns.name.from_text(str(domain))
+
+ def _config_win32_search(self, search):
+ # we call str() on search to convert it from unicode to ascii
+ search = str(search)
+ split_char = self._determine_split_char(search)
+ search_list = search.split(split_char)
+ for s in search_list:
+ if s not in self.search:
+ self.search.append(dns.name.from_text(s))
+
+ def _config_win32_fromkey(self, key, always_try_domain):
+ try:
+ servers, rtype = _winreg.QueryValueEx(key, 'NameServer')
+ except WindowsError: # pylint: disable=undefined-variable
+ servers = None
+ if servers:
+ self._config_win32_nameservers(servers)
+ if servers or always_try_domain:
+ try:
+ dom, rtype = _winreg.QueryValueEx(key, 'Domain')
+ if dom:
+ self._config_win32_domain(dom)
+ except WindowsError: # pylint: disable=undefined-variable
+ pass
+ else:
+ try:
+ servers, rtype = _winreg.QueryValueEx(key, 'DhcpNameServer')
+ except WindowsError: # pylint: disable=undefined-variable
+ servers = None
+ if servers:
+ self._config_win32_nameservers(servers)
+ try:
+ dom, rtype = _winreg.QueryValueEx(key, 'DhcpDomain')
+ if dom:
+ self._config_win32_domain(dom)
+ except WindowsError: # pylint: disable=undefined-variable
+ pass
+ try:
+ search, rtype = _winreg.QueryValueEx(key, 'SearchList')
+ except WindowsError: # pylint: disable=undefined-variable
+ search = None
+ if search:
+ self._config_win32_search(search)
+
+ def read_registry(self):
+ """Extract resolver configuration from the Windows registry."""
+
+ lm = _winreg.ConnectRegistry(None, _winreg.HKEY_LOCAL_MACHINE)
+ want_scan = False
+ try:
+ try:
+ # XP, 2000
+ tcp_params = _winreg.OpenKey(lm,
+ r'SYSTEM\CurrentControlSet'
+ r'\Services\Tcpip\Parameters')
+ want_scan = True
+ except EnvironmentError:
+ # ME
+ tcp_params = _winreg.OpenKey(lm,
+ r'SYSTEM\CurrentControlSet'
+ r'\Services\VxD\MSTCP')
+ try:
+ self._config_win32_fromkey(tcp_params, True)
+ finally:
+ tcp_params.Close()
+ if want_scan:
+ interfaces = _winreg.OpenKey(lm,
+ r'SYSTEM\CurrentControlSet'
+ r'\Services\Tcpip\Parameters'
+ r'\Interfaces')
+ try:
+ i = 0
+ while True:
+ try:
+ guid = _winreg.EnumKey(interfaces, i)
+ i += 1
+ key = _winreg.OpenKey(interfaces, guid)
+ if not self._win32_is_nic_enabled(lm, guid, key):
+ continue
+ try:
+ self._config_win32_fromkey(key, False)
+ finally:
+ key.Close()
+ except EnvironmentError:
+ break
+ finally:
+ interfaces.Close()
+ finally:
+ lm.Close()
+
+ def _win32_is_nic_enabled(self, lm, guid, interface_key):
+ # Look in the Windows Registry to determine whether the network
+ # interface corresponding to the given guid is enabled.
+ #
+ # (Code contributed by Paul Marks, thanks!)
+ #
+ try:
+ # This hard-coded location seems to be consistent, at least
+ # from Windows 2000 through Vista.
+ connection_key = _winreg.OpenKey(
+ lm,
+ r'SYSTEM\CurrentControlSet\Control\Network'
+ r'\{4D36E972-E325-11CE-BFC1-08002BE10318}'
+ r'\%s\Connection' % guid)
+
+ try:
+ # The PnpInstanceID points to a key inside Enum
+ (pnp_id, ttype) = _winreg.QueryValueEx(
+ connection_key, 'PnpInstanceID')
+
+ if ttype != _winreg.REG_SZ:
+ raise ValueError
+
+ device_key = _winreg.OpenKey(
+ lm, r'SYSTEM\CurrentControlSet\Enum\%s' % pnp_id)
+
+ try:
+ # Get ConfigFlags for this device
+ (flags, ttype) = _winreg.QueryValueEx(
+ device_key, 'ConfigFlags')
+
+ if ttype != _winreg.REG_DWORD:
+ raise ValueError
+
+ # Based on experimentation, bit 0x1 indicates that the
+ # device is disabled.
+ return not flags & 0x1
+
+ finally:
+ device_key.Close()
+ finally:
+ connection_key.Close()
+ except (EnvironmentError, ValueError):
+ # Pre-vista, enabled interfaces seem to have a non-empty
+ # NTEContextList; this was how dnspython detected enabled
+ # nics before the code above was contributed. We've retained
+ # the old method since we don't know if the code above works
+ # on Windows 95/98/ME.
+ try:
+ (nte, ttype) = _winreg.QueryValueEx(interface_key,
+ 'NTEContextList')
+ return nte is not None
+ except WindowsError: # pylint: disable=undefined-variable
+ return False
+
+ def _compute_timeout(self, start, lifetime=None):
+ lifetime = self.lifetime if lifetime is None else lifetime
+ now = time.time()
+ duration = now - start
+ if duration < 0:
+ if duration < -1:
+ # Time going backwards is bad. Just give up.
+ raise Timeout(timeout=duration)
+ else:
+ # Time went backwards, but only a little. This can
+ # happen, e.g. under VMware with older Linux kernels.
+ # Pretend it didn't happen.
+ now = start
+ if duration >= lifetime:
+ raise Timeout(timeout=duration)
+ return min(lifetime - duration, self.timeout)
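`_compute_timeout` derives each attempt's socket timeout from the overall *lifetime* budget, tolerating sub-second backwards clock steps but giving up on large ones. The policy in isolation, as a stdlib-only sketch (hypothetical `remaining_timeout`, raising plain `RuntimeError` instead of `dns.exception.Timeout`):

```python
import time


def remaining_timeout(start, per_try_timeout, lifetime, now=None):
    """Per-attempt timeout: the smaller of the per-try limit and
    whatever remains of the overall lifetime budget."""
    if now is None:
        now = time.time()
    duration = now - start
    if duration < 0:
        if duration < -1:
            # Clock moved back by more than a second: give up.
            raise RuntimeError('clock moved backwards by %fs' % -duration)
        duration = 0  # tolerate small skew, e.g. VM clock drift
    if duration >= lifetime:
        raise RuntimeError('lifetime %fs exhausted' % lifetime)
    return min(lifetime - duration, per_try_timeout)
```

So with a 30 s lifetime and 2 s per-try timeout, attempts early in the window get the full 2 s, while an attempt starting 29 s in gets only the remaining 1 s.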
+
+ def query(self, qname, rdtype=dns.rdatatype.A, rdclass=dns.rdataclass.IN,
+ tcp=False, source=None, raise_on_no_answer=True, source_port=0,
+ lifetime=None):
+ """Query nameservers to find the answer to the question.
+
+ The *qname*, *rdtype*, and *rdclass* parameters may be objects
+ of the appropriate type, or strings that can be converted into objects
+ of the appropriate type.
+
+ *qname*, a ``dns.name.Name`` or ``text``, the query name.
+
+ *rdtype*, an ``int`` or ``text``, the query type.
+
+ *rdclass*, an ``int`` or ``text``, the query class.
+
+ *tcp*, a ``bool``. If ``True``, use TCP to make the query.
+
+ *source*, a ``text`` or ``None``. If not ``None``, bind to this IP
+ address when making queries.
+
+ *raise_on_no_answer*, a ``bool``. If ``True``, raise
+ ``dns.resolver.NoAnswer`` if there's no answer to the question.
+
+ *source_port*, an ``int``, the port from which to send the message.
+
+ *lifetime*, a ``float``, how long query should run before timing out.
+
+ Raises ``dns.exception.Timeout`` if no answers could be found
+ in the specified lifetime.
+
+ Raises ``dns.resolver.NXDOMAIN`` if the query name does not exist.
+
+ Raises ``dns.resolver.YXDOMAIN`` if the query name is too long after
+ DNAME substitution.
+
+ Raises ``dns.resolver.NoAnswer`` if *raise_on_no_answer* is
+ ``True`` and the query name exists but has no RRset of the
+ desired type and class.
+
+ Raises ``dns.resolver.NoNameservers`` if no non-broken
+ nameservers are available to answer the question.
+
+ Returns a ``dns.resolver.Answer`` instance.
+ """
+
+ if isinstance(qname, string_types):
+ qname = dns.name.from_text(qname, None)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if dns.rdatatype.is_metatype(rdtype):
+ raise NoMetaqueries
+ if isinstance(rdclass, string_types):
+ rdclass = dns.rdataclass.from_text(rdclass)
+ if dns.rdataclass.is_metaclass(rdclass):
+ raise NoMetaqueries
+ qnames_to_try = []
+ if qname.is_absolute():
+ qnames_to_try.append(qname)
+ else:
+ if len(qname) > 1:
+ qnames_to_try.append(qname.concatenate(dns.name.root))
+ if self.search:
+ for suffix in self.search:
+ qnames_to_try.append(qname.concatenate(suffix))
+ else:
+ qnames_to_try.append(qname.concatenate(self.domain))
+ all_nxdomain = True
+ nxdomain_responses = {}
+ start = time.time()
+ _qname = None # make pylint happy
+ for _qname in qnames_to_try:
+ if self.cache:
+ answer = self.cache.get((_qname, rdtype, rdclass))
+ if answer is not None:
+ if answer.rrset is None and raise_on_no_answer:
+ raise NoAnswer(response=answer.response)
+ else:
+ return answer
+ request = dns.message.make_query(_qname, rdtype, rdclass)
+ if self.keyname is not None:
+ request.use_tsig(self.keyring, self.keyname,
+ algorithm=self.keyalgorithm)
+ request.use_edns(self.edns, self.ednsflags, self.payload)
+ if self.flags is not None:
+ request.flags = self.flags
+ response = None
+ #
+ # make a copy of the servers list so we can alter it later.
+ #
+ nameservers = self.nameservers[:]
+ errors = []
+ if self.rotate:
+ random.shuffle(nameservers)
+ backoff = 0.10
+ while response is None:
+ if len(nameservers) == 0:
+ raise NoNameservers(request=request, errors=errors)
+ for nameserver in nameservers[:]:
+ timeout = self._compute_timeout(start, lifetime)
+ port = self.nameserver_ports.get(nameserver, self.port)
+ try:
+ tcp_attempt = tcp
+ if tcp:
+ response = dns.query.tcp(request, nameserver,
+ timeout, port,
+ source=source,
+ source_port=source_port)
+ else:
+ response = dns.query.udp(request, nameserver,
+ timeout, port,
+ source=source,
+ source_port=source_port)
+ if response.flags & dns.flags.TC:
+ # Response truncated; retry with TCP.
+ tcp_attempt = True
+ timeout = self._compute_timeout(start, lifetime)
+ response = \
+ dns.query.tcp(request, nameserver,
+ timeout, port,
+ source=source,
+ source_port=source_port)
+ except (socket.error, dns.exception.Timeout) as ex:
+ #
+ # Communication failure or timeout. Go to the
+ # next server
+ #
+ errors.append((nameserver, tcp_attempt, port, ex,
+ response))
+ response = None
+ continue
+ except dns.query.UnexpectedSource as ex:
+ #
+ # Who knows? Keep going.
+ #
+ errors.append((nameserver, tcp_attempt, port, ex,
+ response))
+ response = None
+ continue
+ except dns.exception.FormError as ex:
+ #
+ # We don't understand what this server is
+ # saying. Take it out of the mix and
+ # continue.
+ #
+ nameservers.remove(nameserver)
+ errors.append((nameserver, tcp_attempt, port, ex,
+ response))
+ response = None
+ continue
+ except EOFError as ex:
+ #
+ # We're using TCP and they hung up on us.
+ # Probably they don't support TCP (though
+ # they're supposed to!). Take it out of the
+ # mix and continue.
+ #
+ nameservers.remove(nameserver)
+ errors.append((nameserver, tcp_attempt, port, ex,
+ response))
+ response = None
+ continue
+ rcode = response.rcode()
+ if rcode == dns.rcode.YXDOMAIN:
+ ex = YXDOMAIN()
+ errors.append((nameserver, tcp_attempt, port, ex,
+ response))
+ raise ex
+ if rcode == dns.rcode.NOERROR or \
+ rcode == dns.rcode.NXDOMAIN:
+ break
+ #
+ # We got a response, but we're not happy with the
+ # rcode in it. Remove the server from the mix if
+ # the rcode isn't SERVFAIL.
+ #
+ if rcode != dns.rcode.SERVFAIL or not self.retry_servfail:
+ nameservers.remove(nameserver)
+ errors.append((nameserver, tcp_attempt, port,
+ dns.rcode.to_text(rcode), response))
+ response = None
+ if response is not None:
+ break
+ #
+ # All nameservers failed!
+ #
+ if len(nameservers) > 0:
+ #
+ # But we still have servers to try. Sleep a bit
+ # so we don't pound them!
+ #
+ timeout = self._compute_timeout(start, lifetime)
+ sleep_time = min(timeout, backoff)
+ backoff *= 2
+ time.sleep(sleep_time)
+ if response.rcode() == dns.rcode.NXDOMAIN:
+ nxdomain_responses[_qname] = response
+ continue
+ all_nxdomain = False
+ break
+ if all_nxdomain:
+ raise NXDOMAIN(qnames=qnames_to_try, responses=nxdomain_responses)
+ answer = Answer(_qname, rdtype, rdclass, response,
+ raise_on_no_answer)
+ if self.cache:
+ self.cache.put((_qname, rdtype, rdclass), answer)
+ return answer
+
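The retry loop in `query` tries each remaining nameserver once per round, then sleeps between rounds with a doubling backoff (starting at 0.10 s) clipped to the remaining timeout, so failing servers are not pounded. The backoff schedule alone can be sketched as (hypothetical helper, not a dnspython API):

```python
def backoff_schedule(rounds, initial=0.10, cap=None):
    """Sleep durations between retry rounds: doubling from `initial`,
    optionally clipped to `cap` (mirroring min(timeout, backoff))."""
    delays = []
    backoff = initial
    for _ in range(rounds):
        delays.append(min(backoff, cap) if cap is not None else backoff)
        backoff *= 2
    return delays
```

For example, four rounds yield sleeps of 0.1, 0.2, 0.4, and 0.8 seconds; a cap of 0.25 would flatten the later rounds at 0.25 s.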
+ def use_tsig(self, keyring, keyname=None,
+ algorithm=dns.tsig.default_algorithm):
+ """Add a TSIG signature to the query.
+
+ See the documentation of the Message class for a complete
+ description of the keyring dictionary.
+
+ *keyring*, a ``dict``, the TSIG keyring to use. If a
+ *keyring* is specified but a *keyname* is not, then the key
+ used will be the first key in the *keyring*. Note that the
+ order of keys in a dictionary is not defined, so applications
+ should supply a keyname when a keyring is used, unless they
+ know the keyring contains only one key.
+
+ *keyname*, a ``dns.name.Name`` or ``None``, the name of the TSIG key
+ to use; defaults to ``None``. The key must be defined in the keyring.
+
+ *algorithm*, a ``dns.name.Name``, the TSIG algorithm to use.
+ """
+
+ self.keyring = keyring
+ if keyname is None:
+ self.keyname = list(self.keyring.keys())[0]
+ else:
+ self.keyname = keyname
+ self.keyalgorithm = algorithm
+
+ def use_edns(self, edns, ednsflags, payload):
+ """Configure EDNS behavior.
+
+ *edns*, an ``int``, is the EDNS level to use. Specifying
+ ``None``, ``False``, or ``-1`` means "do not use EDNS", and in this case
+ the other parameters are ignored. Specifying ``True`` is
+ equivalent to specifying 0, i.e. "use EDNS0".
+
+ *ednsflags*, an ``int``, the EDNS flag values.
+
+ *payload*, an ``int``, is the EDNS sender's payload field, which is the
+ maximum size of UDP datagram the sender can handle. I.e. how big
+ a response to this message can be.
+ """
+
+ if edns is None:
+ edns = -1
+ self.edns = edns
+ self.ednsflags = ednsflags
+ self.payload = payload
+
+ def set_flags(self, flags):
+ """Overrides the default flags with your own.
+
+ *flags*, an ``int``, the message flags to use.
+ """
+
+ self.flags = flags
+
+
+#: The default resolver.
+default_resolver = None
+
+
+def get_default_resolver():
+ """Get the default resolver, initializing it if necessary."""
+ if default_resolver is None:
+ reset_default_resolver()
+ return default_resolver
+
+
+def reset_default_resolver():
+ """Re-initialize default resolver.
+
+ Note that the resolver configuration (i.e. /etc/resolv.conf on UNIX
+ systems) will be re-read immediately.
+ """
+
+ global default_resolver
+ default_resolver = Resolver()
+
+
+def query(qname, rdtype=dns.rdatatype.A, rdclass=dns.rdataclass.IN,
+ tcp=False, source=None, raise_on_no_answer=True,
+ source_port=0, lifetime=None):
+ """Query nameservers to find the answer to the question.
+
+ This is a convenience function that uses the default resolver
+ object to make the query.
+
+ See ``dns.resolver.Resolver.query`` for more information on the
+ parameters.
+ """
+
+ return get_default_resolver().query(qname, rdtype, rdclass, tcp, source,
+ raise_on_no_answer, source_port,
+ lifetime)
+
+
+def zone_for_name(name, rdclass=dns.rdataclass.IN, tcp=False, resolver=None):
+ """Find the name of the zone which contains the specified name.
+
+ *name*, an absolute ``dns.name.Name`` or ``text``, the query name.
+
+ *rdclass*, an ``int``, the query class.
+
+ *tcp*, a ``bool``. If ``True``, use TCP to make the query.
+
+ *resolver*, a ``dns.resolver.Resolver`` or ``None``, the resolver to use.
+ If ``None``, the default resolver is used.
+
+ Raises ``dns.resolver.NoRootSOA`` if there is no SOA RR at the DNS
+ root. (This is only likely to happen if you're using non-default
+ root servers in your network and they are misconfigured.)
+
+ Returns a ``dns.name.Name``.
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, dns.name.root)
+ if resolver is None:
+ resolver = get_default_resolver()
+ if not name.is_absolute():
+ raise NotAbsolute(name)
+ while 1:
+ try:
+ answer = resolver.query(name, dns.rdatatype.SOA, rdclass, tcp)
+ if answer.rrset.name == name:
+ return name
+ # otherwise we were CNAMEd or DNAMEd and need to look higher
+ except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
+ pass
+ try:
+ name = name.parent()
+ except dns.name.NoParent:
+ raise NoRootSOA
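`zone_for_name` walks from the query name toward the root, issuing an SOA query at each step and stopping at the first name that owns one. The name-walking part alone, with plain strings (hypothetical helper, not the dnspython API):

```python
def ancestor_names(name):
    """Yield an absolute dotted name and each of its parents up to the
    root: 'www.example.com.' -> 'example.com.' -> 'com.' -> '.'."""
    labels = [l for l in name.rstrip('.').split('.') if l]
    while labels:
        yield '.'.join(labels) + '.'
        labels.pop(0)
    yield '.'
```

In the real function, `dns.name.Name.parent()` performs this step and raises `dns.name.NoParent` past the root, which is what triggers `NoRootSOA`.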
+
+#
+# Support for overriding the system resolver for all python code in the
+# running process.
+#
+
+_protocols_for_socktype = {
+ socket.SOCK_DGRAM: [socket.SOL_UDP],
+ socket.SOCK_STREAM: [socket.SOL_TCP],
+}
+
+_resolver = None
+_original_getaddrinfo = socket.getaddrinfo
+_original_getnameinfo = socket.getnameinfo
+_original_getfqdn = socket.getfqdn
+_original_gethostbyname = socket.gethostbyname
+_original_gethostbyname_ex = socket.gethostbyname_ex
+_original_gethostbyaddr = socket.gethostbyaddr
+
+
+def _getaddrinfo(host=None, service=None, family=socket.AF_UNSPEC, socktype=0,
+ proto=0, flags=0):
+ if flags & (socket.AI_ADDRCONFIG | socket.AI_V4MAPPED) != 0:
+ raise NotImplementedError
+ if host is None and service is None:
+ raise socket.gaierror(socket.EAI_NONAME)
+ v6addrs = []
+ v4addrs = []
+ canonical_name = None
+ try:
+ # Is host None or a V6 address literal?
+ if host is None:
+ canonical_name = 'localhost'
+ if flags & socket.AI_PASSIVE != 0:
+ v6addrs.append('::')
+ v4addrs.append('0.0.0.0')
+ else:
+ v6addrs.append('::1')
+ v4addrs.append('127.0.0.1')
+ else:
+ parts = host.split('%')
+ if len(parts) == 2:
+ ahost = parts[0]
+ else:
+ ahost = host
+ addr = dns.ipv6.inet_aton(ahost)
+ v6addrs.append(host)
+ canonical_name = host
+ except Exception:
+ try:
+ # Is it a V4 address literal?
+ addr = dns.ipv4.inet_aton(host)
+ v4addrs.append(host)
+ canonical_name = host
+ except Exception:
+ if flags & socket.AI_NUMERICHOST == 0:
+ try:
+ if family == socket.AF_INET6 or family == socket.AF_UNSPEC:
+ v6 = _resolver.query(host, dns.rdatatype.AAAA,
+ raise_on_no_answer=False)
+ # Note that setting host ensures we query the same name
+ # for A as we did for AAAA.
+ host = v6.qname
+ canonical_name = v6.canonical_name.to_text(True)
+ if v6.rrset is not None:
+ for rdata in v6.rrset:
+ v6addrs.append(rdata.address)
+ if family == socket.AF_INET or family == socket.AF_UNSPEC:
+ v4 = _resolver.query(host, dns.rdatatype.A,
+ raise_on_no_answer=False)
+ host = v4.qname
+ canonical_name = v4.canonical_name.to_text(True)
+ if v4.rrset is not None:
+ for rdata in v4.rrset:
+ v4addrs.append(rdata.address)
+ except dns.resolver.NXDOMAIN:
+ raise socket.gaierror(socket.EAI_NONAME)
+ except Exception:
+ raise socket.gaierror(socket.EAI_SYSTEM)
+ port = None
+ try:
+ # Is it a port literal?
+ if service is None:
+ port = 0
+ else:
+ port = int(service)
+ except Exception:
+ if flags & socket.AI_NUMERICSERV == 0:
+ try:
+ port = socket.getservbyname(service)
+ except Exception:
+ pass
+ if port is None:
+ raise socket.gaierror(socket.EAI_NONAME)
+ tuples = []
+ if socktype == 0:
+ socktypes = [socket.SOCK_DGRAM, socket.SOCK_STREAM]
+ else:
+ socktypes = [socktype]
+ if flags & socket.AI_CANONNAME != 0:
+ cname = canonical_name
+ else:
+ cname = ''
+ if family == socket.AF_INET6 or family == socket.AF_UNSPEC:
+ for addr in v6addrs:
+ for socktype in socktypes:
+ for proto in _protocols_for_socktype[socktype]:
+ tuples.append((socket.AF_INET6, socktype, proto,
+ cname, (addr, port, 0, 0)))
+ if family == socket.AF_INET or family == socket.AF_UNSPEC:
+ for addr in v4addrs:
+ for socktype in socktypes:
+ for proto in _protocols_for_socktype[socktype]:
+ tuples.append((socket.AF_INET, socktype, proto,
+ cname, (addr, port)))
+ if len(tuples) == 0:
+ raise socket.gaierror(socket.EAI_NONAME)
+ return tuples
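The tuples built above follow the stdlib `getaddrinfo` contract: each entry is a `(family, socktype, proto, canonname, sockaddr)` 5-tuple, with `sockaddr` a 2-tuple for IPv4 and a 4-tuple for IPv6. For comparison, the stdlib call with a numeric literal (no DNS lookup involved):

```python
import socket


def numeric_addrinfo(host, port):
    """getaddrinfo for an address literal; returns the standard
    (family, socktype, proto, canonname, sockaddr) 5-tuples."""
    return socket.getaddrinfo(host, port, socket.AF_INET,
                              socket.SOCK_STREAM, socket.IPPROTO_TCP,
                              socket.AI_NUMERICHOST)
```

Note `canonname` is the empty string unless `AI_CANONNAME` was requested, matching the `cname = ''` branch above.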
+
+
+def _getnameinfo(sockaddr, flags=0):
+ host = sockaddr[0]
+ port = sockaddr[1]
+ if len(sockaddr) == 4:
+ scope = sockaddr[3]
+ family = socket.AF_INET6
+ else:
+ scope = None
+ family = socket.AF_INET
+ tuples = _getaddrinfo(host, port, family, socket.SOCK_STREAM,
+ socket.SOL_TCP, 0)
+ if len(tuples) > 1:
+ raise socket.error('sockaddr resolved to multiple addresses')
+ addr = tuples[0][4][0]
+ if flags & socket.NI_DGRAM:
+ pname = 'udp'
+ else:
+ pname = 'tcp'
+ qname = dns.reversename.from_address(addr)
+ if flags & socket.NI_NUMERICHOST == 0:
+ try:
+ answer = _resolver.query(qname, 'PTR')
+ hostname = answer.rrset[0].target.to_text(True)
+ except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
+ if flags & socket.NI_NAMEREQD:
+ raise socket.gaierror(socket.EAI_NONAME)
+ hostname = addr
+ if scope is not None:
+ hostname += '%' + str(scope)
+ else:
+ hostname = addr
+ if scope is not None:
+ hostname += '%' + str(scope)
+ if flags & socket.NI_NUMERICSERV:
+ service = str(port)
+ else:
+ service = socket.getservbyport(port, pname)
+ return (hostname, service)
+
+
+def _getfqdn(name=None):
+ if name is None:
+ name = socket.gethostname()
+ try:
+ return _getnameinfo(_getaddrinfo(name, 80)[0][4])[0]
+ except Exception:
+ return name
+
+
+def _gethostbyname(name):
+ return _gethostbyname_ex(name)[2][0]
+
+
+def _gethostbyname_ex(name):
+ aliases = []
+ addresses = []
+ tuples = _getaddrinfo(name, 0, socket.AF_INET, socket.SOCK_STREAM,
+ socket.SOL_TCP, socket.AI_CANONNAME)
+ canonical = tuples[0][3]
+ for item in tuples:
+ addresses.append(item[4][0])
+ # XXX we just ignore aliases
+ return (canonical, aliases, addresses)
+
+
+def _gethostbyaddr(ip):
+ try:
+ dns.ipv6.inet_aton(ip)
+ sockaddr = (ip, 80, 0, 0)
+ family = socket.AF_INET6
+ except Exception:
+ sockaddr = (ip, 80)
+ family = socket.AF_INET
+ (name, port) = _getnameinfo(sockaddr, socket.NI_NAMEREQD)
+ aliases = []
+ addresses = []
+ tuples = _getaddrinfo(name, 0, family, socket.SOCK_STREAM, socket.SOL_TCP,
+ socket.AI_CANONNAME)
+ canonical = tuples[0][3]
+ for item in tuples:
+ addresses.append(item[4][0])
+ # XXX we just ignore aliases
+ return (canonical, aliases, addresses)
+
+
+def override_system_resolver(resolver=None):
+ """Override the system resolver routines in the socket module with
+ versions which use dnspython's resolver.
+
+ This can be useful in testing situations where you want to control
+ the resolution behavior of python code without having to change
+ the system's resolver settings (e.g. /etc/resolv.conf).
+
+ The resolver to use may be specified; if it's not, the default
+ resolver will be used.
+
+ *resolver*, a ``dns.resolver.Resolver`` or ``None``, the resolver to use.
+ """
+
+ if resolver is None:
+ resolver = get_default_resolver()
+ global _resolver
+ _resolver = resolver
+ socket.getaddrinfo = _getaddrinfo
+ socket.getnameinfo = _getnameinfo
+ socket.getfqdn = _getfqdn
+ socket.gethostbyname = _gethostbyname
+ socket.gethostbyname_ex = _gethostbyname_ex
+ socket.gethostbyaddr = _gethostbyaddr
+
+
+def restore_system_resolver():
+ """Undo the effects of prior override_system_resolver()."""
+
+ global _resolver
+ _resolver = None
+ socket.getaddrinfo = _original_getaddrinfo
+ socket.getnameinfo = _original_getnameinfo
+ socket.getfqdn = _original_getfqdn
+ socket.gethostbyname = _original_gethostbyname
+ socket.gethostbyname_ex = _original_gethostbyname_ex
+ socket.gethostbyaddr = _original_gethostbyaddr
diff --git a/openpype/vendor/python/python_2/dns/reversename.py b/openpype/vendor/python/python_2/dns/reversename.py
new file mode 100644
index 0000000000..8f095fa91e
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/reversename.py
@@ -0,0 +1,96 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2006-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Reverse Map Names."""
+
+import binascii
+
+import dns.name
+import dns.ipv6
+import dns.ipv4
+
+from dns._compat import PY3
+
+ipv4_reverse_domain = dns.name.from_text('in-addr.arpa.')
+ipv6_reverse_domain = dns.name.from_text('ip6.arpa.')
+
+
+def from_address(text):
+ """Convert an IPv4 or IPv6 address in textual form into a Name object whose
+ value is the reverse-map domain name of the address.
+
+ *text*, a ``text``, is an IPv4 or IPv6 address in textual form
+ (e.g. '127.0.0.1', '::1')
+
+ Raises ``dns.exception.SyntaxError`` if the address is badly formed.
+
+ Returns a ``dns.name.Name``.
+ """
+
+ try:
+ v6 = dns.ipv6.inet_aton(text)
+ if dns.ipv6.is_mapped(v6):
+ if PY3:
+ parts = ['%d' % byte for byte in v6[12:]]
+ else:
+ parts = ['%d' % ord(byte) for byte in v6[12:]]
+ origin = ipv4_reverse_domain
+ else:
+ parts = [x for x in str(binascii.hexlify(v6).decode())]
+ origin = ipv6_reverse_domain
+ except Exception:
+ parts = ['%d' %
+ byte for byte in bytearray(dns.ipv4.inet_aton(text))]
+ origin = ipv4_reverse_domain
+ parts.reverse()
+ return dns.name.from_text('.'.join(parts), origin=origin)
+
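On Python 3, the stdlib `ipaddress` module computes the same reverse-map names via its `reverse_pointer` attribute (shown for comparison only; this vendored module targets Python 2, where `ipaddress` is unavailable):

```python
import ipaddress


def reverse_name(text):
    """Absolute reverse-map name for an IPv4/IPv6 literal, via the stdlib."""
    # reverse_pointer omits the trailing dot; append it for an absolute name.
    return ipaddress.ip_address(text).reverse_pointer + '.'
```

For example, `reverse_name('127.0.0.1')` gives `'1.0.0.127.in-addr.arpa.'`, and IPv6 addresses expand to one label per nibble under `ip6.arpa.`.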
+
+def to_address(name):
+ """Convert a reverse map domain name into textual address form.
+
+ *name*, a ``dns.name.Name``, an IPv4 or IPv6 address in reverse-map name
+ form.
+
+ Raises ``dns.exception.SyntaxError`` if the name does not have a
+ reverse-map form.
+
+ Returns a ``text``.
+ """
+
+ if name.is_subdomain(ipv4_reverse_domain):
+ name = name.relativize(ipv4_reverse_domain)
+ labels = list(name.labels)
+ labels.reverse()
+ text = b'.'.join(labels)
+ # run through inet_aton() to check syntax and make pretty.
+ return dns.ipv4.inet_ntoa(dns.ipv4.inet_aton(text))
+ elif name.is_subdomain(ipv6_reverse_domain):
+ name = name.relativize(ipv6_reverse_domain)
+ labels = list(name.labels)
+ labels.reverse()
+ parts = []
+ i = 0
+ l = len(labels)
+ while i < l:
+ parts.append(b''.join(labels[i:i + 4]))
+ i += 4
+ text = b':'.join(parts)
+ # run through inet_aton() to check syntax and make pretty.
+ return dns.ipv6.inet_ntoa(dns.ipv6.inet_aton(text))
+ else:
+ raise dns.exception.SyntaxError('unknown reverse-map address family')
diff --git a/openpype/vendor/python/python_2/dns/rrset.py b/openpype/vendor/python/python_2/dns/rrset.py
new file mode 100644
index 0000000000..a53ec324b8
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/rrset.py
@@ -0,0 +1,189 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS RRsets (an RRset is a named rdataset)"""
+
+
+import dns.name
+import dns.rdataset
+import dns.rdataclass
+import dns.renderer
+from ._compat import string_types
+
+
+class RRset(dns.rdataset.Rdataset):
+
+ """A DNS RRset (named rdataset).
+
+ RRset inherits from Rdataset, and RRsets can be treated as
+ Rdatasets in most cases. There are, however, a few notable
+ exceptions. RRsets have different to_wire() and to_text() method
+ arguments, reflecting the fact that RRsets always have an owner
+ name.
+ """
+
+ __slots__ = ['name', 'deleting']
+
+ def __init__(self, name, rdclass, rdtype, covers=dns.rdatatype.NONE,
+ deleting=None):
+ """Create a new RRset."""
+
+ super(RRset, self).__init__(rdclass, rdtype, covers)
+ self.name = name
+ self.deleting = deleting
+
+ def _clone(self):
+ obj = super(RRset, self)._clone()
+ obj.name = self.name
+ obj.deleting = self.deleting
+ return obj
+
+ def __repr__(self):
+ if self.covers == 0:
+ ctext = ''
+ else:
+ ctext = '(' + dns.rdatatype.to_text(self.covers) + ')'
+ if self.deleting is not None:
+ dtext = ' delete=' + dns.rdataclass.to_text(self.deleting)
+ else:
+ dtext = ''
+ return '<DNS ' + str(self.name) + ' ' + \
+ dns.rdataclass.to_text(self.rdclass) + ' ' + \
+ dns.rdatatype.to_text(self.rdtype) + ctext + dtext + ' RRset>'
+
+ def __str__(self):
+ return self.to_text()
+
+ def __eq__(self, other):
+ if not isinstance(other, RRset):
+ return False
+ if self.name != other.name:
+ return False
+ return super(RRset, self).__eq__(other)
+
+ def match(self, name, rdclass, rdtype, covers, deleting=None):
+ """Returns ``True`` if this rrset matches the specified class, type,
+ covers, and deletion state.
+ """
+
+ if not super(RRset, self).match(rdclass, rdtype, covers):
+ return False
+ if self.name != name or self.deleting != deleting:
+ return False
+ return True
+
+ def to_text(self, origin=None, relativize=True, **kw):
+ """Convert the RRset into DNS master file format.
+
+ See ``dns.name.Name.choose_relativity`` for more information
+ on how *origin* and *relativize* determine the way names
+ are emitted.
+
+ Any additional keyword arguments are passed on to the rdata
+ ``to_text()`` method.
+
+ *origin*, a ``dns.name.Name`` or ``None``, the origin for relative
+ names.
+
+ *relativize*, a ``bool``. If ``True``, names will be relativized
+ to *origin*.
+ """
+
+ return super(RRset, self).to_text(self.name, origin, relativize,
+ self.deleting, **kw)
+
+ def to_wire(self, file, compress=None, origin=None, **kw):
+ """Convert the RRset to wire format.
+
+ All keyword arguments are passed to ``dns.rdataset.to_wire()``; see
+ that function for details.
+
+ Returns an ``int``, the number of records emitted.
+ """
+
+ return super(RRset, self).to_wire(self.name, file, compress, origin,
+ self.deleting, **kw)
+
+ def to_rdataset(self):
+ """Convert an RRset into an Rdataset.
+
+ Returns a ``dns.rdataset.Rdataset``.
+ """
+ return dns.rdataset.from_rdata_list(self.ttl, list(self))
+
+
+def from_text_list(name, ttl, rdclass, rdtype, text_rdatas,
+ idna_codec=None):
+ """Create an RRset with the specified name, TTL, class, and type, and with
+ the specified list of rdatas in text format.
+
+ Returns a ``dns.rrset.RRset`` object.
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None, idna_codec=idna_codec)
+ if isinstance(rdclass, string_types):
+ rdclass = dns.rdataclass.from_text(rdclass)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ r = RRset(name, rdclass, rdtype)
+ r.update_ttl(ttl)
+ for t in text_rdatas:
+ rd = dns.rdata.from_text(r.rdclass, r.rdtype, t)
+ r.add(rd)
+ return r
+
+
+def from_text(name, ttl, rdclass, rdtype, *text_rdatas):
+ """Create an RRset with the specified name, TTL, class, and type and with
+ the specified rdatas in text format.
+
+ Returns a ``dns.rrset.RRset`` object.
+ """
+
+ return from_text_list(name, ttl, rdclass, rdtype, text_rdatas)
+
+
+def from_rdata_list(name, ttl, rdatas, idna_codec=None):
+ """Create an RRset with the specified name and TTL, and with
+ the specified list of rdata objects.
+
+ Returns a ``dns.rrset.RRset`` object.
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None, idna_codec=idna_codec)
+
+ if len(rdatas) == 0:
+ raise ValueError("rdata list must not be empty")
+ r = None
+ for rd in rdatas:
+ if r is None:
+ r = RRset(name, rd.rdclass, rd.rdtype)
+ r.update_ttl(ttl)
+ r.add(rd)
+ return r
+
+
+def from_rdata(name, ttl, *rdatas):
+ """Create an RRset with the specified name and TTL, and with
+ the specified rdata objects.
+
+ Returns a ``dns.rrset.RRset`` object.
+ """
+
+ return from_rdata_list(name, ttl, rdatas)
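The `from_rdata_list` helper above groups rdatas under a single owner name, taking the class and type from the rdatas themselves. A standalone sketch of that grouping logic, using a hypothetical `Rdata` stand-in (not the real dnspython classes) and a plain dict in place of an `RRset`:

```python
from collections import namedtuple

# Hypothetical stand-in for dns.rdata, for illustration only.
Rdata = namedtuple('Rdata', ['rdclass', 'rdtype', 'value'])


def from_rdata_list(name, ttl, rdatas):
    """Group rdatas under one owner name; class/type come from the first rdata."""
    if not rdatas:
        raise ValueError("rdata list must not be empty")
    first = rdatas[0]
    records = []
    for rd in rdatas:
        if (rd.rdclass, rd.rdtype) != (first.rdclass, first.rdtype):
            raise ValueError("all rdatas must share class and type")
        if rd not in records:  # rrsets have set semantics: duplicates are dropped
            records.append(rd)
    return {'name': name, 'ttl': ttl, 'rdclass': first.rdclass,
            'rdtype': first.rdtype, 'records': records}
```

The real `RRset.add` enforces the class/type match via `Rdataset`; the sketch makes that check explicit.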
diff --git a/openpype/vendor/python/python_2/dns/set.py b/openpype/vendor/python/python_2/dns/set.py
new file mode 100644
index 0000000000..81329bf457
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/set.py
@@ -0,0 +1,261 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+class Set(object):
+
+ """A simple set class.
+
+ This class was originally used to deal with sets being missing in
+ ancient versions of python, but dnspython will continue to use it
+ as these sets are based on lists and are thus indexable, and this
+ ability is widely used in dnspython applications.
+ """
+
+ __slots__ = ['items']
+
+ def __init__(self, items=None):
+ """Initialize the set.
+
+ *items*, an iterable or ``None``, the initial set of items.
+ """
+
+ self.items = []
+ if items is not None:
+ for item in items:
+ self.add(item)
+
+ def __repr__(self):
+ return "dns.simpleset.Set(%s)" % repr(self.items)
+
+ def add(self, item):
+ """Add an item to the set.
+ """
+
+ if item not in self.items:
+ self.items.append(item)
+
+ def remove(self, item):
+ """Remove an item from the set.
+ """
+
+ self.items.remove(item)
+
+ def discard(self, item):
+ """Remove an item from the set if present.
+ """
+
+ try:
+ self.items.remove(item)
+ except ValueError:
+ pass
+
+ def _clone(self):
+ """Make a (shallow) copy of the set.
+
+ There is a 'clone protocol' that subclasses of this class
+ should use. To make a copy, first call your super's _clone()
+ method, and use the object returned as the new instance. Then
+ make shallow copies of the attributes defined in the subclass.
+
+ This protocol allows us to write the set algorithms that
+ return new instances (e.g. union) once, and keep using them in
+ subclasses.
+ """
+
+ cls = self.__class__
+ obj = cls.__new__(cls)
+ obj.items = list(self.items)
+ return obj
+
+ def __copy__(self):
+ """Make a (shallow) copy of the set.
+ """
+
+ return self._clone()
+
+ def copy(self):
+ """Make a (shallow) copy of the set.
+ """
+
+ return self._clone()
+
+ def union_update(self, other):
+ """Update the set, adding any elements from other which are not
+ already in the set.
+ """
+
+ if not isinstance(other, Set):
+ raise ValueError('other must be a Set instance')
+ if self is other:
+ return
+ for item in other.items:
+ self.add(item)
+
+ def intersection_update(self, other):
+ """Update the set, removing any elements from other which are not
+ in both sets.
+ """
+
+ if not isinstance(other, Set):
+ raise ValueError('other must be a Set instance')
+ if self is other:
+ return
+ # we make a copy of the list so that we can remove items from
+ # the list without breaking the iterator.
+ for item in list(self.items):
+ if item not in other.items:
+ self.items.remove(item)
+
+ def difference_update(self, other):
+ """Update the set, removing any elements from other which are in
+ the set.
+ """
+
+ if not isinstance(other, Set):
+ raise ValueError('other must be a Set instance')
+ if self is other:
+ self.items = []
+ else:
+ for item in other.items:
+ self.discard(item)
+
+ def union(self, other):
+ """Return a new set which is the union of ``self`` and ``other``.
+
+ Returns the same Set type as this set.
+ """
+
+ obj = self._clone()
+ obj.union_update(other)
+ return obj
+
+ def intersection(self, other):
+ """Return a new set which is the intersection of ``self`` and
+ ``other``.
+
+ Returns the same Set type as this set.
+ """
+
+ obj = self._clone()
+ obj.intersection_update(other)
+ return obj
+
+ def difference(self, other):
+ """Return a new set which ``self`` - ``other``, i.e. the items
+ in ``self`` which are not also in ``other``.
+
+ Returns the same Set type as this set.
+ """
+
+ obj = self._clone()
+ obj.difference_update(other)
+ return obj
+
+ def __or__(self, other):
+ return self.union(other)
+
+ def __and__(self, other):
+ return self.intersection(other)
+
+ def __add__(self, other):
+ return self.union(other)
+
+ def __sub__(self, other):
+ return self.difference(other)
+
+ def __ior__(self, other):
+ self.union_update(other)
+ return self
+
+ def __iand__(self, other):
+ self.intersection_update(other)
+ return self
+
+ def __iadd__(self, other):
+ self.union_update(other)
+ return self
+
+ def __isub__(self, other):
+ self.difference_update(other)
+ return self
+
+ def update(self, other):
+ """Update the set, adding any elements from other which are not
+ already in the set.
+
+ *other*, the collection of items with which to update the set, which
+ may be any iterable type.
+ """
+
+ for item in other:
+ self.add(item)
+
+ def clear(self):
+ """Make the set empty."""
+ self.items = []
+
+ def __eq__(self, other):
+ # Yes, this is inefficient but the sets we're dealing with are
+ # usually quite small, so it shouldn't hurt too much.
+ for item in self.items:
+ if item not in other.items:
+ return False
+ for item in other.items:
+ if item not in self.items:
+ return False
+ return True
+
+ def __ne__(self, other):
+ return not self.__eq__(other)
+
+ def __len__(self):
+ return len(self.items)
+
+ def __iter__(self):
+ return iter(self.items)
+
+ def __getitem__(self, i):
+ return self.items[i]
+
+ def __delitem__(self, i):
+ del self.items[i]
+
+ def issubset(self, other):
+ """Is this set a subset of *other*?
+
+ Returns a ``bool``.
+ """
+
+ if not isinstance(other, Set):
+ raise ValueError('other must be a Set instance')
+ for item in self.items:
+ if item not in other.items:
+ return False
+ return True
+
+ def issuperset(self, other):
+ """Is this set a superset of *other*?
+
+ Returns a ``bool``.
+ """
+
+ if not isinstance(other, Set):
+ raise ValueError('other must be a Set instance')
+ for item in other.items:
+ if item not in self.items:
+ return False
+ return True
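The docstring of `Set` notes that it is list-backed, so iteration order is insertion order and items are indexable — the property dnspython applications rely on. A minimal standalone sketch of that core idea (the class name is hypothetical, not part of dnspython):

```python
class ListBackedSet(object):
    """Sketch of dns.set.Set's core idea: a set stored as a list, so
    membership order is preserved and items are indexable."""

    def __init__(self, items=None):
        self.items = []
        if items is not None:
            for item in items:
                self.add(item)

    def add(self, item):
        # Set semantics: ignore duplicates, keep first-insertion order.
        if item not in self.items:
            self.items.append(item)

    def union(self, other):
        # Clone self, then fold in other's items (mirrors _clone + union_update).
        obj = ListBackedSet(self.items)
        for item in other.items:
            obj.add(item)
        return obj

    def __getitem__(self, i):
        return self.items[i]

    def __len__(self):
        return len(self.items)
```

Note the trade-off the real class accepts: membership tests are O(n), which is fine for the small sets (typically a handful of rdatas) dnspython deals with.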
diff --git a/openpype/vendor/python/python_2/dns/tokenizer.py b/openpype/vendor/python/python_2/dns/tokenizer.py
new file mode 100644
index 0000000000..880b71ce7a
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/tokenizer.py
@@ -0,0 +1,571 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""Tokenize DNS master file format"""
+
+from io import StringIO
+import sys
+
+import dns.exception
+import dns.name
+import dns.ttl
+from ._compat import long, text_type, binary_type
+
+_DELIMITERS = {
+ ' ': True,
+ '\t': True,
+ '\n': True,
+ ';': True,
+ '(': True,
+ ')': True,
+ '"': True}
+
+_QUOTING_DELIMITERS = {'"': True}
+
+EOF = 0
+EOL = 1
+WHITESPACE = 2
+IDENTIFIER = 3
+QUOTED_STRING = 4
+COMMENT = 5
+DELIMITER = 6
+
+
+class UngetBufferFull(dns.exception.DNSException):
+ """An attempt was made to unget a token when the unget buffer was full."""
+
+
+class Token(object):
+ """A DNS master file format token.
+
+ ttype: The token type
+ value: The token value
+ has_escape: Does the token value contain escapes?
+ """
+
+ def __init__(self, ttype, value='', has_escape=False):
+ """Initialize a token instance."""
+
+ self.ttype = ttype
+ self.value = value
+ self.has_escape = has_escape
+
+ def is_eof(self):
+ return self.ttype == EOF
+
+ def is_eol(self):
+ return self.ttype == EOL
+
+ def is_whitespace(self):
+ return self.ttype == WHITESPACE
+
+ def is_identifier(self):
+ return self.ttype == IDENTIFIER
+
+ def is_quoted_string(self):
+ return self.ttype == QUOTED_STRING
+
+ def is_comment(self):
+ return self.ttype == COMMENT
+
+ def is_delimiter(self):
+ return self.ttype == DELIMITER
+
+ def is_eol_or_eof(self):
+ return self.ttype == EOL or self.ttype == EOF
+
+ def __eq__(self, other):
+ if not isinstance(other, Token):
+ return False
+ return (self.ttype == other.ttype and
+ self.value == other.value)
+
+ def __ne__(self, other):
+ if not isinstance(other, Token):
+ return True
+ return (self.ttype != other.ttype or
+ self.value != other.value)
+
+ def __str__(self):
+ return '%d "%s"' % (self.ttype, self.value)
+
+ def unescape(self):
+ if not self.has_escape:
+ return self
+ unescaped = ''
+ l = len(self.value)
+ i = 0
+ while i < l:
+ c = self.value[i]
+ i += 1
+ if c == '\\':
+ if i >= l:
+ raise dns.exception.UnexpectedEnd
+ c = self.value[i]
+ i += 1
+ if c.isdigit():
+ if i >= l:
+ raise dns.exception.UnexpectedEnd
+ c2 = self.value[i]
+ i += 1
+ if i >= l:
+ raise dns.exception.UnexpectedEnd
+ c3 = self.value[i]
+ i += 1
+ if not (c2.isdigit() and c3.isdigit()):
+ raise dns.exception.SyntaxError
+ c = chr(int(c) * 100 + int(c2) * 10 + int(c3))
+ unescaped += c
+ return Token(self.ttype, unescaped)
+
+ # compatibility for old-style tuple tokens
+
+ def __len__(self):
+ return 2
+
+ def __iter__(self):
+ return iter((self.ttype, self.value))
+
+ def __getitem__(self, i):
+ if i == 0:
+ return self.ttype
+ elif i == 1:
+ return self.value
+ else:
+ raise IndexError
+
+
+class Tokenizer(object):
+ """A DNS master file format tokenizer.
+
+ A token object is basically a (type, value) tuple. The valid
+ types are EOF, EOL, WHITESPACE, IDENTIFIER, QUOTED_STRING,
+ COMMENT, and DELIMITER.
+
+ file: The file to tokenize
+
+ ungotten_char: The most recently ungotten character, or None.
+
+ ungotten_token: The most recently ungotten token, or None.
+
+ multiline: The current multiline level. This value is increased
+ by one every time a '(' delimiter is read, and decreased by one every time
+ a ')' delimiter is read.
+
+ quoting: This variable is true if the tokenizer is currently
+ reading a quoted string.
+
+ eof: This variable is true if the tokenizer has encountered EOF.
+
+ delimiters: The current delimiter dictionary.
+
+ line_number: The current line number
+
+ filename: A filename that will be returned by the where() method.
+ """
+
+ def __init__(self, f=sys.stdin, filename=None):
+ """Initialize a tokenizer instance.
+
+ f: The file to tokenize. The default is sys.stdin.
+ This parameter may also be a string, in which case the tokenizer
+ will take its input from the contents of the string.
+
+ filename: the name of the filename that the where() method
+ will return.
+ """
+
+ if isinstance(f, text_type):
+ f = StringIO(f)
+ if filename is None:
+ filename = '<string>'
+ elif isinstance(f, binary_type):
+ f = StringIO(f.decode())
+ if filename is None:
+ filename = '<string>'
+ else:
+ if filename is None:
+ if f is sys.stdin:
+ filename = '<stdin>'
+ else:
+ filename = '<file>'
+ self.file = f
+ self.ungotten_char = None
+ self.ungotten_token = None
+ self.multiline = 0
+ self.quoting = False
+ self.eof = False
+ self.delimiters = _DELIMITERS
+ self.line_number = 1
+ self.filename = filename
+
+ def _get_char(self):
+ """Read a character from input.
+ """
+
+ if self.ungotten_char is None:
+ if self.eof:
+ c = ''
+ else:
+ c = self.file.read(1)
+ if c == '':
+ self.eof = True
+ elif c == '\n':
+ self.line_number += 1
+ else:
+ c = self.ungotten_char
+ self.ungotten_char = None
+ return c
+
+ def where(self):
+ """Return the current location in the input.
+
+ Returns a (string, int) tuple. The first item is the filename of
+ the input, the second is the current line number.
+ """
+
+ return (self.filename, self.line_number)
+
+ def _unget_char(self, c):
+ """Unget a character.
+
+ The unget buffer for characters is only one character large; it is
+ an error to try to unget a character when the unget buffer is not
+ empty.
+
+ c: the character to unget
+ raises UngetBufferFull: there is already an ungotten char
+ """
+
+ if self.ungotten_char is not None:
+ raise UngetBufferFull
+ self.ungotten_char = c
+
+ def skip_whitespace(self):
+ """Consume input until a non-whitespace character is encountered.
+
+ The non-whitespace character is then ungotten, and the number of
+ whitespace characters consumed is returned.
+
+ If the tokenizer is in multiline mode, then newlines are whitespace.
+
+ Returns the number of characters skipped.
+ """
+
+ skipped = 0
+ while True:
+ c = self._get_char()
+ if c != ' ' and c != '\t':
+ if (c != '\n') or not self.multiline:
+ self._unget_char(c)
+ return skipped
+ skipped += 1
+
+ def get(self, want_leading=False, want_comment=False):
+ """Get the next token.
+
+ want_leading: If True, return a WHITESPACE token if the
+ first character read is whitespace. The default is False.
+
+ want_comment: If True, return a COMMENT token if the
+ first token read is a comment. The default is False.
+
+ Raises dns.exception.UnexpectedEnd: input ended prematurely
+
+ Raises dns.exception.SyntaxError: input was badly formed
+
+ Returns a Token.
+ """
+
+ if self.ungotten_token is not None:
+ token = self.ungotten_token
+ self.ungotten_token = None
+ if token.is_whitespace():
+ if want_leading:
+ return token
+ elif token.is_comment():
+ if want_comment:
+ return token
+ else:
+ return token
+ skipped = self.skip_whitespace()
+ if want_leading and skipped > 0:
+ return Token(WHITESPACE, ' ')
+ token = ''
+ ttype = IDENTIFIER
+ has_escape = False
+ while True:
+ c = self._get_char()
+ if c == '' or c in self.delimiters:
+ if c == '' and self.quoting:
+ raise dns.exception.UnexpectedEnd
+ if token == '' and ttype != QUOTED_STRING:
+ if c == '(':
+ self.multiline += 1
+ self.skip_whitespace()
+ continue
+ elif c == ')':
+ if self.multiline <= 0:
+ raise dns.exception.SyntaxError
+ self.multiline -= 1
+ self.skip_whitespace()
+ continue
+ elif c == '"':
+ if not self.quoting:
+ self.quoting = True
+ self.delimiters = _QUOTING_DELIMITERS
+ ttype = QUOTED_STRING
+ continue
+ else:
+ self.quoting = False
+ self.delimiters = _DELIMITERS
+ self.skip_whitespace()
+ continue
+ elif c == '\n':
+ return Token(EOL, '\n')
+ elif c == ';':
+ while 1:
+ c = self._get_char()
+ if c == '\n' or c == '':
+ break
+ token += c
+ if want_comment:
+ self._unget_char(c)
+ return Token(COMMENT, token)
+ elif c == '':
+ if self.multiline:
+ raise dns.exception.SyntaxError(
+ 'unbalanced parentheses')
+ return Token(EOF)
+ elif self.multiline:
+ self.skip_whitespace()
+ token = ''
+ continue
+ else:
+ return Token(EOL, '\n')
+ else:
+ # This code exists in case we ever want a
+ # delimiter to be returned. It never produces
+ # a token currently.
+ token = c
+ ttype = DELIMITER
+ else:
+ self._unget_char(c)
+ break
+ elif self.quoting:
+ if c == '\\':
+ c = self._get_char()
+ if c == '':
+ raise dns.exception.UnexpectedEnd
+ if c.isdigit():
+ c2 = self._get_char()
+ if c2 == '':
+ raise dns.exception.UnexpectedEnd
+ c3 = self._get_char()
+ if c3 == '':
+ raise dns.exception.UnexpectedEnd
+ if not (c2.isdigit() and c3.isdigit()):
+ raise dns.exception.SyntaxError
+ c = chr(int(c) * 100 + int(c2) * 10 + int(c3))
+ elif c == '\n':
+ raise dns.exception.SyntaxError('newline in quoted string')
+ elif c == '\\':
+ #
+ # It's an escape. Put it and the next character into
+ # the token; it will be checked later for goodness.
+ #
+ token += c
+ has_escape = True
+ c = self._get_char()
+ if c == '' or c == '\n':
+ raise dns.exception.UnexpectedEnd
+ token += c
+ if token == '' and ttype != QUOTED_STRING:
+ if self.multiline:
+ raise dns.exception.SyntaxError('unbalanced parentheses')
+ ttype = EOF
+ return Token(ttype, token, has_escape)
+
+ def unget(self, token):
+ """Unget a token.
+
+ The unget buffer for tokens is only one token large; it is
+ an error to try to unget a token when the unget buffer is not
+ empty.
+
+ token: the token to unget
+
+ Raises UngetBufferFull: there is already an ungotten token
+ """
+
+ if self.ungotten_token is not None:
+ raise UngetBufferFull
+ self.ungotten_token = token
+
+ def next(self):
+ """Return the next item in an iteration.
+
+ Returns a Token.
+ """
+
+ token = self.get()
+ if token.is_eof():
+ raise StopIteration
+ return token
+
+ __next__ = next
+
+ def __iter__(self):
+ return self
+
+ # Helpers
+
+ def get_int(self, base=10):
+ """Read the next token and interpret it as an integer.
+
+ Raises dns.exception.SyntaxError if not an integer.
+
+ Returns an int.
+ """
+
+ token = self.get().unescape()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError('expecting an identifier')
+ if not token.value.isdigit():
+ raise dns.exception.SyntaxError('expecting an integer')
+ return int(token.value, base)
+
+ def get_uint8(self):
+ """Read the next token and interpret it as an 8-bit unsigned
+ integer.
+
+ Raises dns.exception.SyntaxError if not an 8-bit unsigned integer.
+
+ Returns an int.
+ """
+
+ value = self.get_int()
+ if value < 0 or value > 255:
+ raise dns.exception.SyntaxError(
+ '%d is not an unsigned 8-bit integer' % value)
+ return value
+
+ def get_uint16(self, base=10):
+ """Read the next token and interpret it as a 16-bit unsigned
+ integer.
+
+ Raises dns.exception.SyntaxError if not a 16-bit unsigned integer.
+
+ Returns an int.
+ """
+
+ value = self.get_int(base=base)
+ if value < 0 or value > 65535:
+ if base == 8:
+ raise dns.exception.SyntaxError(
+ '%o is not an octal unsigned 16-bit integer' % value)
+ else:
+ raise dns.exception.SyntaxError(
+ '%d is not an unsigned 16-bit integer' % value)
+ return value
+
+ def get_uint32(self):
+ """Read the next token and interpret it as a 32-bit unsigned
+ integer.
+
+ Raises dns.exception.SyntaxError if not a 32-bit unsigned integer.
+
+ Returns an int.
+ """
+
+ token = self.get().unescape()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError('expecting an identifier')
+ if not token.value.isdigit():
+ raise dns.exception.SyntaxError('expecting an integer')
+ value = long(token.value)
+ if value < 0 or value > long(4294967295):
+ raise dns.exception.SyntaxError(
+ '%d is not an unsigned 32-bit integer' % value)
+ return value
+
+ def get_string(self, origin=None):
+ """Read the next token and interpret it as a string.
+
+ Raises dns.exception.SyntaxError if not a string.
+
+ Returns a string.
+ """
+
+ token = self.get().unescape()
+ if not (token.is_identifier() or token.is_quoted_string()):
+ raise dns.exception.SyntaxError('expecting a string')
+ return token.value
+
+ def get_identifier(self, origin=None):
+ """Read the next token, which should be an identifier.
+
+ Raises dns.exception.SyntaxError if not an identifier.
+
+ Returns a string.
+ """
+
+ token = self.get().unescape()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError('expecting an identifier')
+ return token.value
+
+ def get_name(self, origin=None):
+ """Read the next token and interpret it as a DNS name.
+
+ Raises dns.exception.SyntaxError if not a name.
+
+ Returns a dns.name.Name.
+ """
+
+ token = self.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError('expecting an identifier')
+ return dns.name.from_text(token.value, origin)
+
+ def get_eol(self):
+ """Read the next token and raise an exception if it isn't EOL or
+ EOF.
+
+ Returns a string.
+ """
+
+ token = self.get()
+ if not token.is_eol_or_eof():
+ raise dns.exception.SyntaxError(
+ 'expected EOL or EOF, got %d "%s"' % (token.ttype,
+ token.value))
+ return token.value
+
+ def get_ttl(self):
+ """Read the next token and interpret it as a DNS TTL.
+
+ Raises dns.exception.SyntaxError or dns.ttl.BadTTL if not an
+ identifier or badly formed.
+
+ Returns an int.
+ """
+
+ token = self.get().unescape()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError('expecting an identifier')
+ return dns.ttl.from_text(token.value)
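`Token.unescape` above decodes master-file escapes: `\X` yields the literal character and `\DDD` (three decimal digits) yields `chr(DDD)`. The core transform can be sketched standalone, without the tokenizer machinery:

```python
def unescape(value):
    """Decode DNS master-file escapes: '\\X' -> X, '\\DDD' -> chr(DDD).

    A standalone sketch of the transform Token.unescape performs; the
    three-digit form is decimal, per master-file convention.
    """
    out = ''
    i = 0
    while i < len(value):
        c = value[i]
        i += 1
        if c == '\\':
            if i >= len(value):
                raise ValueError('escape at end of string')
            if value[i].isdigit():
                # \DDD needs exactly three decimal digits.
                digits = value[i:i + 3]
                if len(digits) < 3 or not digits.isdigit():
                    raise ValueError('bad \\DDD escape')
                out += chr(int(digits))
                i += 3
            else:
                out += value[i]  # \X: the next character, taken literally
                i += 1
        else:
            out += c
    return out
```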
diff --git a/openpype/vendor/python/python_2/dns/tsig.py b/openpype/vendor/python/python_2/dns/tsig.py
new file mode 100644
index 0000000000..3daa387855
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/tsig.py
@@ -0,0 +1,236 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2001-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS TSIG support."""
+
+import hashlib
+import hmac
+import struct
+
+import dns.exception
+import dns.rdataclass
+import dns.name
+from ._compat import long, string_types, text_type
+
+class BadTime(dns.exception.DNSException):
+
+ """The current time is not within the TSIG's validity time."""
+
+
+class BadSignature(dns.exception.DNSException):
+
+ """The TSIG signature fails to verify."""
+
+
+class PeerError(dns.exception.DNSException):
+
+ """Base class for all TSIG errors generated by the remote peer"""
+
+
+class PeerBadKey(PeerError):
+
+ """The peer didn't know the key we used"""
+
+
+class PeerBadSignature(PeerError):
+
+ """The peer didn't like the signature we sent"""
+
+
+class PeerBadTime(PeerError):
+
+ """The peer didn't like the time we sent"""
+
+
+class PeerBadTruncation(PeerError):
+
+ """The peer didn't like amount of truncation in the TSIG we sent"""
+
+# TSIG Algorithms
+
+HMAC_MD5 = dns.name.from_text("HMAC-MD5.SIG-ALG.REG.INT")
+HMAC_SHA1 = dns.name.from_text("hmac-sha1")
+HMAC_SHA224 = dns.name.from_text("hmac-sha224")
+HMAC_SHA256 = dns.name.from_text("hmac-sha256")
+HMAC_SHA384 = dns.name.from_text("hmac-sha384")
+HMAC_SHA512 = dns.name.from_text("hmac-sha512")
+
+_hashes = {
+ HMAC_SHA224: hashlib.sha224,
+ HMAC_SHA256: hashlib.sha256,
+ HMAC_SHA384: hashlib.sha384,
+ HMAC_SHA512: hashlib.sha512,
+ HMAC_SHA1: hashlib.sha1,
+ HMAC_MD5: hashlib.md5,
+}
+
+default_algorithm = HMAC_MD5
+
+BADSIG = 16
+BADKEY = 17
+BADTIME = 18
+BADTRUNC = 22
+
+
+def sign(wire, keyname, secret, time, fudge, original_id, error,
+ other_data, request_mac, ctx=None, multi=False, first=True,
+ algorithm=default_algorithm):
+ """Return a (tsig_rdata, mac, ctx) tuple containing the HMAC TSIG rdata
+ for the input parameters, the HMAC MAC calculated by applying the
+ TSIG signature algorithm, and the TSIG digest context.
+ @rtype: (string, string, hmac.HMAC object)
+ @raises ValueError: I{other_data} is too long
+ @raises NotImplementedError: I{algorithm} is not supported
+ """
+
+ if isinstance(other_data, text_type):
+ other_data = other_data.encode()
+ (algorithm_name, digestmod) = get_algorithm(algorithm)
+ if first:
+ ctx = hmac.new(secret, digestmod=digestmod)
+ ml = len(request_mac)
+ if ml > 0:
+ ctx.update(struct.pack('!H', ml))
+ ctx.update(request_mac)
+ id = struct.pack('!H', original_id)
+ ctx.update(id)
+ ctx.update(wire[2:])
+ if first:
+ ctx.update(keyname.to_digestable())
+ ctx.update(struct.pack('!H', dns.rdataclass.ANY))
+ ctx.update(struct.pack('!I', 0))
+ long_time = time + long(0)
+ upper_time = (long_time >> 32) & long(0xffff)
+ lower_time = long_time & long(0xffffffff)
+ time_mac = struct.pack('!HIH', upper_time, lower_time, fudge)
+ pre_mac = algorithm_name + time_mac
+ ol = len(other_data)
+ if ol > 65535:
+ raise ValueError('TSIG Other Data is > 65535 bytes')
+ post_mac = struct.pack('!HH', error, ol) + other_data
+ if first:
+ ctx.update(pre_mac)
+ ctx.update(post_mac)
+ else:
+ ctx.update(time_mac)
+ mac = ctx.digest()
+ mpack = struct.pack('!H', len(mac))
+ tsig_rdata = pre_mac + mpack + mac + id + post_mac
+ if multi:
+ ctx = hmac.new(secret, digestmod=digestmod)
+ ml = len(mac)
+ ctx.update(struct.pack('!H', ml))
+ ctx.update(mac)
+ else:
+ ctx = None
+ return (tsig_rdata, mac, ctx)
+
+
+def hmac_md5(wire, keyname, secret, time, fudge, original_id, error,
+ other_data, request_mac, ctx=None, multi=False, first=True,
+ algorithm=default_algorithm):
+ return sign(wire, keyname, secret, time, fudge, original_id, error,
+ other_data, request_mac, ctx, multi, first, algorithm)
+
+
+def validate(wire, keyname, secret, now, request_mac, tsig_start, tsig_rdata,
+ tsig_rdlen, ctx=None, multi=False, first=True):
+ """Validate the specified TSIG rdata against the other input parameters.
+
+ @raises FormError: The TSIG is badly formed.
+ @raises BadTime: There is too much time skew between the client and the
+ server.
+ @raises BadSignature: The TSIG signature did not validate
+ @rtype: hmac.HMAC object"""
+
+ (adcount,) = struct.unpack("!H", wire[10:12])
+ if adcount == 0:
+ raise dns.exception.FormError
+ adcount -= 1
+ new_wire = wire[0:10] + struct.pack("!H", adcount) + wire[12:tsig_start]
+ current = tsig_rdata
+ (aname, used) = dns.name.from_wire(wire, current)
+ current = current + used
+ (upper_time, lower_time, fudge, mac_size) = \
+ struct.unpack("!HIHH", wire[current:current + 10])
+ time = ((upper_time + long(0)) << 32) + (lower_time + long(0))
+ current += 10
+ mac = wire[current:current + mac_size]
+ current += mac_size
+ (original_id, error, other_size) = \
+ struct.unpack("!HHH", wire[current:current + 6])
+ current += 6
+ other_data = wire[current:current + other_size]
+ current += other_size
+ if current != tsig_rdata + tsig_rdlen:
+ raise dns.exception.FormError
+ if error != 0:
+ if error == BADSIG:
+ raise PeerBadSignature
+ elif error == BADKEY:
+ raise PeerBadKey
+ elif error == BADTIME:
+ raise PeerBadTime
+ elif error == BADTRUNC:
+ raise PeerBadTruncation
+ else:
+ raise PeerError('unknown TSIG error code %d' % error)
+ time_low = time - fudge
+ time_high = time + fudge
+ if now < time_low or now > time_high:
+ raise BadTime
+ (junk, our_mac, ctx) = sign(new_wire, keyname, secret, time, fudge,
+ original_id, error, other_data,
+ request_mac, ctx, multi, first, aname)
+ if our_mac != mac:
+ raise BadSignature
+ return ctx
+
+
+def get_algorithm(algorithm):
+ """Returns the wire format string and the hash module to use for the
+ specified TSIG algorithm
+
+ @rtype: (string, hash constructor)
+ @raises NotImplementedError: I{algorithm} is not supported
+ """
+
+ if isinstance(algorithm, string_types):
+ algorithm = dns.name.from_text(algorithm)
+
+ try:
+ return (algorithm.to_digestable(), _hashes[algorithm])
+ except KeyError:
+ raise NotImplementedError("TSIG algorithm " + str(algorithm) +
+ " is not supported")
+
+
+def get_algorithm_and_mac(wire, tsig_rdata, tsig_rdlen):
+ """Return the tsig algorithm for the specified tsig_rdata
+ @raises FormError: The TSIG is badly formed.
+ """
+ current = tsig_rdata
+ (aname, used) = dns.name.from_wire(wire, current)
+ current = current + used
+ (upper_time, lower_time, fudge, mac_size) = \
+ struct.unpack("!HIHH", wire[current:current + 10])
+ current += 10
+ mac = wire[current:current + mac_size]
+ current += mac_size
+ if current > tsig_rdata + tsig_rdlen:
+ raise dns.exception.FormError
+ return (aname, mac)
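Both `sign()` and `validate()` above handle the TSIG "time signed" field, a 48-bit value split into a 16-bit upper half and a 32-bit lower half and packed alongside the 16-bit fudge as `!HIH`. That packing can be checked in isolation (helper names here are illustrative, not dnspython API):

```python
import struct


def pack_tsig_time(time, fudge):
    """Split a 48-bit TSIG time into 16-bit upper / 32-bit lower halves,
    then pack them with the 16-bit fudge, as sign() does above."""
    upper_time = (time >> 32) & 0xffff
    lower_time = time & 0xffffffff
    return struct.pack('!HIH', upper_time, lower_time, fudge)


def unpack_tsig_time(data):
    """Inverse: recover (time, fudge) from the 8-byte packed form,
    mirroring the unpack in validate()."""
    upper_time, lower_time, fudge = struct.unpack('!HIH', data)
    return ((upper_time << 32) + lower_time, fudge)
```

The split exists because `struct` has no 48-bit format code; `validate()` then accepts any `now` within `time ± fudge`.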
diff --git a/openpype/vendor/python/python_2/dns/tsigkeyring.py b/openpype/vendor/python/python_2/dns/tsigkeyring.py
new file mode 100644
index 0000000000..5e5fe1cbe4
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/tsigkeyring.py
@@ -0,0 +1,50 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""A place to store TSIG keys."""
+
+from dns._compat import maybe_decode, maybe_encode
+
+import base64
+
+import dns.name
+
+
+def from_text(textring):
+ """Convert a dictionary containing (textual DNS name, base64 secret) pairs
+ into a binary keyring which has (dns.name.Name, binary secret) pairs.
+ @rtype: dict"""
+
+ keyring = {}
+ for keytext in textring:
+ keyname = dns.name.from_text(keytext)
+ secret = base64.decodestring(maybe_encode(textring[keytext]))
+ keyring[keyname] = secret
+ return keyring
+
+
+def to_text(keyring):
+ """Convert a dictionary containing (dns.name.Name, binary secret) pairs
+ into a text keyring which has (textual DNS name, base64 secret) pairs.
+ @rtype: dict"""
+
+ textring = {}
+ for keyname in keyring:
+ keytext = maybe_decode(keyname.to_text())
+ secret = maybe_decode(base64.encodestring(keyring[keyname]))
+ textring[keytext] = secret
+ return textring
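`from_text` and `to_text` are essentially base64 codecs keyed by DNS name. The round trip they perform can be shown with the stdlib alone; this sketch uses the modern `base64.b64decode`/`b64encode` names rather than the Python 2 `decodestring`/`encodestring` spellings used above, and a made-up secret:

```python
import base64

secret_text = 'MTIzNDU2Nzg5MA=='        # base64 of b'1234567890' (made-up key)
secret = base64.b64decode(secret_text)  # the binary form from_text stores
assert secret == b'1234567890'

# to_text performs the inverse: binary secret back to base64 text.
assert base64.b64encode(secret).decode() == secret_text
```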
diff --git a/openpype/vendor/python/python_2/dns/ttl.py b/openpype/vendor/python/python_2/dns/ttl.py
new file mode 100644
index 0000000000..4be16bee5b
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/ttl.py
@@ -0,0 +1,70 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS TTL conversion."""
+
+import dns.exception
+from ._compat import long
+
+
+class BadTTL(dns.exception.SyntaxError):
+ """DNS TTL value is not well-formed."""
+
+
+def from_text(text):
+ """Convert the text form of a TTL to an integer.
+
+ The BIND 8 units syntax for TTLs (e.g. '1w6d4h3m10s') is supported.
+
+ *text*, a ``text``, the textual TTL.
+
+ Raises ``dns.ttl.BadTTL`` if the TTL is not well-formed.
+
+ Returns an ``int``.
+ """
+
+ if text.isdigit():
+ total = long(text)
+ else:
+ if not text[0].isdigit():
+ raise BadTTL
+ total = long(0)
+ current = long(0)
+ for c in text:
+ if c.isdigit():
+ current *= 10
+ current += long(c)
+ else:
+ c = c.lower()
+ if c == 'w':
+ total += current * long(604800)
+ elif c == 'd':
+ total += current * long(86400)
+ elif c == 'h':
+ total += current * long(3600)
+ elif c == 'm':
+ total += current * long(60)
+ elif c == 's':
+ total += current
+ else:
+ raise BadTTL("unknown unit '%s'" % c)
+ current = 0
+ if not current == 0:
+ raise BadTTL("trailing integer")
+ if total < long(0) or total > long(2147483647):
+ raise BadTTL("TTL should be between 0 and 2^31 - 1 (inclusive)")
+ return total
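The unit-suffix parsing above can be condensed into a table-driven sketch (a simplified re-implementation for illustration, not the vendored function; it skips the leading-digit and 2^31 - 1 range checks):

```python
UNITS = {'w': 604800, 'd': 86400, 'h': 3600, 'm': 60, 's': 1}

def ttl_from_text(text):
    """Parse a BIND 8 style TTL such as '1w6d4h3m10s' into seconds."""
    if text.isdigit():
        return int(text)
    total = current = 0
    for c in text.lower():
        if c.isdigit():
            current = current * 10 + int(c)  # accumulate the number before a unit
        elif c in UNITS:
            total += current * UNITS[c]
            current = 0
        else:
            raise ValueError("unknown unit %r" % c)
    if current:
        raise ValueError("trailing integer")  # digits with no unit suffix
    return total

assert ttl_from_text('1w6d4h3m10s') == 1137790
assert ttl_from_text('300') == 300
```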
diff --git a/openpype/vendor/python/python_2/dns/update.py b/openpype/vendor/python/python_2/dns/update.py
new file mode 100644
index 0000000000..96a00d5dbe
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/update.py
@@ -0,0 +1,279 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Dynamic Update Support"""
+
+
+import dns.message
+import dns.name
+import dns.opcode
+import dns.rdata
+import dns.rdataclass
+import dns.rdataset
+import dns.tsig
+from ._compat import string_types
+
+
+class Update(dns.message.Message):
+
+ def __init__(self, zone, rdclass=dns.rdataclass.IN, keyring=None,
+ keyname=None, keyalgorithm=dns.tsig.default_algorithm):
+ """Initialize a new DNS Update object.
+
+ See the documentation of the Message class for a complete
+ description of the keyring dictionary.
+
+ *zone*, a ``dns.name.Name`` or ``text``, the zone which is being
+ updated.
+
+ *rdclass*, an ``int`` or ``text``, the class of the zone.
+
+ *keyring*, a ``dict``, the TSIG keyring to use. If a
+ *keyring* is specified but a *keyname* is not, then the key
+ used will be the first key in the *keyring*. Note that the
+ order of keys in a dictionary is not defined, so applications
+ should supply a keyname when a keyring is used, unless they
+ know the keyring contains only one key.
+
+ *keyname*, a ``dns.name.Name`` or ``None``, the name of the TSIG key
+ to use; defaults to ``None``. The key must be defined in the keyring.
+
+ *keyalgorithm*, a ``dns.name.Name``, the TSIG algorithm to use.
+ """
+ super(Update, self).__init__()
+ self.flags |= dns.opcode.to_flags(dns.opcode.UPDATE)
+ if isinstance(zone, string_types):
+ zone = dns.name.from_text(zone)
+ self.origin = zone
+ if isinstance(rdclass, string_types):
+ rdclass = dns.rdataclass.from_text(rdclass)
+ self.zone_rdclass = rdclass
+ self.find_rrset(self.question, self.origin, rdclass, dns.rdatatype.SOA,
+ create=True, force_unique=True)
+ if keyring is not None:
+ self.use_tsig(keyring, keyname, algorithm=keyalgorithm)
+
+ def _add_rr(self, name, ttl, rd, deleting=None, section=None):
+ """Add a single RR to the update section."""
+
+ if section is None:
+ section = self.authority
+ covers = rd.covers()
+ rrset = self.find_rrset(section, name, self.zone_rdclass, rd.rdtype,
+ covers, deleting, True, True)
+ rrset.add(rd, ttl)
+
+ def _add(self, replace, section, name, *args):
+ """Add records.
+
+ *replace* is the replacement mode. If ``False``,
+ RRs are added to an existing RRset; if ``True``, the RRset
+ is replaced with the specified contents. The second
+ argument is the section to add to. The third argument
+ is always a name. The other arguments can be:
+
+ - rdataset...
+
+ - ttl, rdata...
+
+ - ttl, rdtype, string...
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None)
+ if isinstance(args[0], dns.rdataset.Rdataset):
+ for rds in args:
+ if replace:
+ self.delete(name, rds.rdtype)
+ for rd in rds:
+ self._add_rr(name, rds.ttl, rd, section=section)
+ else:
+ args = list(args)
+ ttl = int(args.pop(0))
+ if isinstance(args[0], dns.rdata.Rdata):
+ if replace:
+ self.delete(name, args[0].rdtype)
+ for rd in args:
+ self._add_rr(name, ttl, rd, section=section)
+ else:
+ rdtype = args.pop(0)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if replace:
+ self.delete(name, rdtype)
+ for s in args:
+ rd = dns.rdata.from_text(self.zone_rdclass, rdtype, s,
+ self.origin)
+ self._add_rr(name, ttl, rd, section=section)
+
+ def add(self, name, *args):
+ """Add records.
+
+ The first argument is always a name. The other
+ arguments can be:
+
+ - rdataset...
+
+ - ttl, rdata...
+
+ - ttl, rdtype, string...
+ """
+
+ self._add(False, self.authority, name, *args)
+
+ def delete(self, name, *args):
+ """Delete records.
+
+ The first argument is always a name. The other
+ arguments can be:
+
+ - *empty*
+
+ - rdataset...
+
+ - rdata...
+
+ - rdtype, [string...]
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None)
+ if len(args) == 0:
+ self.find_rrset(self.authority, name, dns.rdataclass.ANY,
+ dns.rdatatype.ANY, dns.rdatatype.NONE,
+ dns.rdatatype.ANY, True, True)
+ elif isinstance(args[0], dns.rdataset.Rdataset):
+ for rds in args:
+ for rd in rds:
+ self._add_rr(name, 0, rd, dns.rdataclass.NONE)
+ else:
+ args = list(args)
+ if isinstance(args[0], dns.rdata.Rdata):
+ for rd in args:
+ self._add_rr(name, 0, rd, dns.rdataclass.NONE)
+ else:
+ rdtype = args.pop(0)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if len(args) == 0:
+ self.find_rrset(self.authority, name,
+ self.zone_rdclass, rdtype,
+ dns.rdatatype.NONE,
+ dns.rdataclass.ANY,
+ True, True)
+ else:
+ for s in args:
+ rd = dns.rdata.from_text(self.zone_rdclass, rdtype, s,
+ self.origin)
+ self._add_rr(name, 0, rd, dns.rdataclass.NONE)
+
+ def replace(self, name, *args):
+ """Replace records.
+
+ The first argument is always a name. The other
+ arguments can be:
+
+ - rdataset...
+
+ - ttl, rdata...
+
+ - ttl, rdtype, string...
+
+ Note that if you want to replace the entire node, you should do
+ a delete of the name followed by one or more calls to add.
+ """
+
+ self._add(True, self.authority, name, *args)
+
+ def present(self, name, *args):
+ """Require that an owner name (and optionally an rdata type,
+ or specific rdataset) exists as a prerequisite to the
+ execution of the update.
+
+ The first argument is always a name.
+ The other arguments can be:
+
+ - rdataset...
+
+ - rdata...
+
+ - rdtype, string...
+ """
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None)
+ if len(args) == 0:
+ self.find_rrset(self.answer, name,
+ dns.rdataclass.ANY, dns.rdatatype.ANY,
+ dns.rdatatype.NONE, None,
+ True, True)
+ elif isinstance(args[0], dns.rdataset.Rdataset) or \
+ isinstance(args[0], dns.rdata.Rdata) or \
+ len(args) > 1:
+ if not isinstance(args[0], dns.rdataset.Rdataset):
+ # Add a 0 TTL
+ args = list(args)
+ args.insert(0, 0)
+ self._add(False, self.answer, name, *args)
+ else:
+ rdtype = args[0]
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ self.find_rrset(self.answer, name,
+ dns.rdataclass.ANY, rdtype,
+ dns.rdatatype.NONE, None,
+ True, True)
+
+ def absent(self, name, rdtype=None):
+ """Require that an owner name (and optionally an rdata type) does
+ not exist as a prerequisite to the execution of the update."""
+
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None)
+ if rdtype is None:
+ self.find_rrset(self.answer, name,
+ dns.rdataclass.NONE, dns.rdatatype.ANY,
+ dns.rdatatype.NONE, None,
+ True, True)
+ else:
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ self.find_rrset(self.answer, name,
+ dns.rdataclass.NONE, rdtype,
+ dns.rdatatype.NONE, None,
+ True, True)
+
+ def to_wire(self, origin=None, max_size=65535):
+ """Return a string containing the update in DNS compressed wire
+ format.
+
+ *origin*, a ``dns.name.Name`` or ``None``, the origin to be
+ appended to any relative names. If *origin* is ``None``, then
+ the origin of the ``dns.update.Update`` message object is used
+ (i.e. the *zone* parameter passed when the Update object was
+ created).
+
+        *max_size*, an ``int``, the maximum size of the wire format
+        output; defaults to 65535.
+
+ Returns a ``binary``.
+ """
+
+ if origin is None:
+ origin = self.origin
+ return super(Update, self).to_wire(origin, max_size)
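The three calling conventions that `_add` accepts (rdataset..., or ttl + rdata..., or ttl + rdtype + string...) can be made concrete with stand-in types. `FakeRdata` and `dispatch` below are hypothetical, shown only to illustrate the dispatch order; plain lists stand in for rdatasets:

```python
class FakeRdata:
    """Stand-in for dns.rdata.Rdata, for illustration only."""
    def __init__(self, rdtype, text):
        self.rdtype = rdtype
        self.text = text

def dispatch(*args):
    """Classify an _add-style argument list the way Update._add does."""
    if isinstance(args[0], list):      # rdataset... form (no TTL argument)
        return "rdataset"
    args = list(args)
    ttl = int(args.pop(0))             # otherwise the first arg must be a TTL
    if isinstance(args[0], FakeRdata):
        return ("rdata", ttl)          # ttl, rdata... form
    return ("text", ttl, args[0])      # ttl, rdtype, string... form

assert dispatch([1, 2]) == "rdataset"
assert dispatch(300, FakeRdata("A", "10.0.0.1")) == ("rdata", 300)
assert dispatch(300, "A", "10.0.0.1") == ("text", 300, "A")
```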
diff --git a/openpype/vendor/python/python_2/dns/version.py b/openpype/vendor/python/python_2/dns/version.py
new file mode 100644
index 0000000000..f116904b46
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/version.py
@@ -0,0 +1,43 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""dnspython release version information."""
+
+#: MAJOR
+MAJOR = 1
+#: MINOR
+MINOR = 16
+#: MICRO
+MICRO = 0
+#: RELEASELEVEL
+RELEASELEVEL = 0x0f
+#: SERIAL
+SERIAL = 0
+
+if RELEASELEVEL == 0x0f:
+ #: version
+ version = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
+elif RELEASELEVEL == 0x00:
+ version = '%d.%d.%dx%d' % \
+ (MAJOR, MINOR, MICRO, SERIAL)
+else:
+ version = '%d.%d.%d%x%d' % \
+ (MAJOR, MINOR, MICRO, RELEASELEVEL, SERIAL)
+
+#: hexversion
+hexversion = MAJOR << 24 | MINOR << 16 | MICRO << 8 | RELEASELEVEL << 4 | \
+ SERIAL
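The `hexversion` packing mirrors CPython's own `sys.hexversion` layout: one byte each for major, minor, and micro, then a release-level nibble and a serial nibble. Each field can be masked back out by shifting:

```python
MAJOR, MINOR, MICRO, RELEASELEVEL, SERIAL = 1, 16, 0, 0x0f, 0
hexversion = MAJOR << 24 | MINOR << 16 | MICRO << 8 | RELEASELEVEL << 4 | SERIAL

assert hexversion == 0x011000f0        # dnspython 1.16.0 final

# Recover the fields by shifting and masking:
assert (hexversion >> 24) & 0xff == 1      # MAJOR
assert (hexversion >> 16) & 0xff == 16     # MINOR
assert (hexversion >> 8) & 0xff == 0       # MICRO
assert (hexversion >> 4) & 0x0f == 0x0f    # RELEASELEVEL (0x0f = final)
```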
diff --git a/openpype/vendor/python/python_2/dns/wiredata.py b/openpype/vendor/python/python_2/dns/wiredata.py
new file mode 100644
index 0000000000..ea3c1e67d6
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/wiredata.py
@@ -0,0 +1,103 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2011,2017 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Wire Data Helper"""
+
+import dns.exception
+from ._compat import binary_type, string_types, PY2
+
+# Figure out what constant python passes for an unspecified slice bound.
+# It's supposed to be sys.maxint, yet on 64-bit windows sys.maxint is 2^31 - 1
+# but Python uses 2^63 - 1 as the constant. Rather than making pointless
+# extra comparisons, duplicating code, or weakening WireData, we just figure
+# out what constant Python will use.
+
+
+class _SliceUnspecifiedBound(binary_type):
+
+ def __getitem__(self, key):
+ return key.stop
+
+ if PY2:
+ def __getslice__(self, i, j): # pylint: disable=getslice-method
+ return self.__getitem__(slice(i, j))
+
+_unspecified_bound = _SliceUnspecifiedBound()[1:]
+
+
+class WireData(binary_type):
+ # WireData is a binary type with stricter slicing
+
+ def __getitem__(self, key):
+ try:
+ if isinstance(key, slice):
+ # make sure we are not going outside of valid ranges,
+ # do stricter control of boundaries than python does
+ # by default
+ start = key.start
+ stop = key.stop
+
+ if PY2:
+ if stop == _unspecified_bound:
+ # handle the case where the right bound is unspecified
+ stop = len(self)
+
+ if start < 0 or stop < 0:
+ raise dns.exception.FormError
+ # If it's not an empty slice, access left and right bounds
+ # to make sure they're valid
+ if start != stop:
+ super(WireData, self).__getitem__(start)
+ super(WireData, self).__getitem__(stop - 1)
+ else:
+ for index in (start, stop):
+ if index is None:
+ continue
+ elif abs(index) > len(self):
+ raise dns.exception.FormError
+
+ return WireData(super(WireData, self).__getitem__(
+ slice(start, stop)))
+ return bytearray(self.unwrap())[key]
+ except IndexError:
+ raise dns.exception.FormError
+
+ if PY2:
+ def __getslice__(self, i, j): # pylint: disable=getslice-method
+ return self.__getitem__(slice(i, j))
+
+ def __iter__(self):
+ i = 0
+ while 1:
+ try:
+ yield self[i]
+ i += 1
+ except dns.exception.FormError:
+                return
+
+ def unwrap(self):
+ return binary_type(self)
+
+
+def maybe_wrap(wire):
+ if isinstance(wire, WireData):
+ return wire
+ elif isinstance(wire, binary_type):
+ return WireData(wire)
+ elif isinstance(wire, string_types):
+ return WireData(wire.encode())
+ raise ValueError("unhandled type %s" % type(wire))
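The stricter bounds checking that `WireData.__getitem__` performs can be sketched as a small Python 3 `bytes` subclass (simplified: no `PY2` shims, and it validates bounds directly rather than probing the end elements as the vendored class does):

```python
class StrictBytes(bytes):
    """bytes whose slices reject negative or out-of-range bounds,
    instead of silently clamping them as plain bytes does."""

    def __getitem__(self, key):
        if isinstance(key, slice):
            start = 0 if key.start is None else key.start
            stop = len(self) if key.stop is None else key.stop
            if start < 0 or stop < 0 or start > len(self) or stop > len(self):
                raise ValueError("slice out of bounds")
            return StrictBytes(super().__getitem__(slice(start, stop)))
        return super().__getitem__(key)

w = StrictBytes(b'abcdef')
assert w[1:3] == b'bc'       # in-range slices behave normally

try:
    w[2:99]                  # plain bytes would clamp; StrictBytes refuses
    raised = False
except ValueError:
    raised = True
assert raised
```

Refusing to clamp is the point: when parsing untrusted wire data, a slice past the end of the buffer signals a malformed message (`dns.exception.FormError` in the vendored class) rather than silently yielding a short result.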
diff --git a/openpype/vendor/python/python_2/dns/zone.py b/openpype/vendor/python/python_2/dns/zone.py
new file mode 100644
index 0000000000..1e2fe78168
--- /dev/null
+++ b/openpype/vendor/python/python_2/dns/zone.py
@@ -0,0 +1,1127 @@
+# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
+
+# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose with or without fee is hereby granted,
+# provided that the above copyright notice and this permission notice
+# appear in all copies.
+#
+# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
+# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
+# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""DNS Zones."""
+
+from __future__ import generators
+
+import sys
+import re
+import os
+from io import BytesIO
+
+import dns.exception
+import dns.name
+import dns.node
+import dns.rdataclass
+import dns.rdatatype
+import dns.rdata
+import dns.rdtypes.ANY.SOA
+import dns.rrset
+import dns.tokenizer
+import dns.ttl
+import dns.grange
+from ._compat import string_types, text_type, PY3
+
+
+class BadZone(dns.exception.DNSException):
+
+ """The DNS zone is malformed."""
+
+
+class NoSOA(BadZone):
+
+ """The DNS zone has no SOA RR at its origin."""
+
+
+class NoNS(BadZone):
+
+ """The DNS zone has no NS RRset at its origin."""
+
+
+class UnknownOrigin(BadZone):
+
+ """The DNS zone's origin is unknown."""
+
+
+class Zone(object):
+
+ """A DNS zone.
+
+ A Zone is a mapping from names to nodes. The zone object may be
+ treated like a Python dictionary, e.g. zone[name] will retrieve
+ the node associated with that name. The I{name} may be a
+    dns.name.Name object, or it may be a string. In either case,
+ if the name is relative it is treated as relative to the origin of
+ the zone.
+
+ @ivar rdclass: The zone's rdata class; the default is class IN.
+ @type rdclass: int
+ @ivar origin: The origin of the zone.
+ @type origin: dns.name.Name object
+ @ivar nodes: A dictionary mapping the names of nodes in the zone to the
+ nodes themselves.
+ @type nodes: dict
+ @ivar relativize: should names in the zone be relativized?
+ @type relativize: bool
+ @cvar node_factory: the factory used to create a new node
+ @type node_factory: class or callable
+ """
+
+ node_factory = dns.node.Node
+
+ __slots__ = ['rdclass', 'origin', 'nodes', 'relativize']
+
+ def __init__(self, origin, rdclass=dns.rdataclass.IN, relativize=True):
+ """Initialize a zone object.
+
+ @param origin: The origin of the zone.
+ @type origin: dns.name.Name object
+ @param rdclass: The zone's rdata class; the default is class IN.
+ @type rdclass: int"""
+
+ if origin is not None:
+ if isinstance(origin, string_types):
+ origin = dns.name.from_text(origin)
+ elif not isinstance(origin, dns.name.Name):
+ raise ValueError("origin parameter must be convertible to a "
+ "DNS name")
+ if not origin.is_absolute():
+ raise ValueError("origin parameter must be an absolute name")
+ self.origin = origin
+ self.rdclass = rdclass
+ self.nodes = {}
+ self.relativize = relativize
+
+ def __eq__(self, other):
+ """Two zones are equal if they have the same origin, class, and
+ nodes.
+ @rtype: bool
+ """
+
+ if not isinstance(other, Zone):
+ return False
+ if self.rdclass != other.rdclass or \
+ self.origin != other.origin or \
+ self.nodes != other.nodes:
+ return False
+ return True
+
+ def __ne__(self, other):
+ """Are two zones not equal?
+ @rtype: bool
+ """
+
+ return not self.__eq__(other)
+
+ def _validate_name(self, name):
+ if isinstance(name, string_types):
+ name = dns.name.from_text(name, None)
+ elif not isinstance(name, dns.name.Name):
+ raise KeyError("name parameter must be convertible to a DNS name")
+ if name.is_absolute():
+ if not name.is_subdomain(self.origin):
+ raise KeyError(
+ "name parameter must be a subdomain of the zone origin")
+ if self.relativize:
+ name = name.relativize(self.origin)
+ return name
+
+ def __getitem__(self, key):
+ key = self._validate_name(key)
+ return self.nodes[key]
+
+ def __setitem__(self, key, value):
+ key = self._validate_name(key)
+ self.nodes[key] = value
+
+ def __delitem__(self, key):
+ key = self._validate_name(key)
+ del self.nodes[key]
+
+ def __iter__(self):
+ return self.nodes.__iter__()
+
+ def iterkeys(self):
+ if PY3:
+ return self.nodes.keys() # pylint: disable=dict-keys-not-iterating
+ else:
+ return self.nodes.iterkeys() # pylint: disable=dict-iter-method
+
+ def keys(self):
+ return self.nodes.keys() # pylint: disable=dict-keys-not-iterating
+
+ def itervalues(self):
+ if PY3:
+ return self.nodes.values() # pylint: disable=dict-values-not-iterating
+ else:
+ return self.nodes.itervalues() # pylint: disable=dict-iter-method
+
+ def values(self):
+ return self.nodes.values() # pylint: disable=dict-values-not-iterating
+
+ def items(self):
+ return self.nodes.items() # pylint: disable=dict-items-not-iterating
+
+ iteritems = items
+
+ def get(self, key):
+ key = self._validate_name(key)
+ return self.nodes.get(key)
+
+ def __contains__(self, other):
+ return other in self.nodes
+
+ def find_node(self, name, create=False):
+ """Find a node in the zone, possibly creating it.
+
+ @param name: the name of the node to find
+ @type name: dns.name.Name object or string
+ @param create: should the node be created if it doesn't exist?
+ @type create: bool
+ @raises KeyError: the name is not known and create was not specified.
+ @rtype: dns.node.Node object
+ """
+
+ name = self._validate_name(name)
+ node = self.nodes.get(name)
+ if node is None:
+ if not create:
+ raise KeyError
+ node = self.node_factory()
+ self.nodes[name] = node
+ return node
+
+ def get_node(self, name, create=False):
+ """Get a node in the zone, possibly creating it.
+
+ This method is like L{find_node}, except it returns None instead
+ of raising an exception if the node does not exist and creation
+ has not been requested.
+
+ @param name: the name of the node to find
+ @type name: dns.name.Name object or string
+ @param create: should the node be created if it doesn't exist?
+ @type create: bool
+ @rtype: dns.node.Node object or None
+ """
+
+ try:
+ node = self.find_node(name, create)
+ except KeyError:
+ node = None
+ return node
+
+ def delete_node(self, name):
+ """Delete the specified node if it exists.
+
+ It is not an error if the node does not exist.
+ """
+
+ name = self._validate_name(name)
+ if name in self.nodes:
+ del self.nodes[name]
+
+ def find_rdataset(self, name, rdtype, covers=dns.rdatatype.NONE,
+ create=False):
+ """Look for rdata with the specified name and type in the zone,
+ and return an rdataset encapsulating it.
+
+ The I{name}, I{rdtype}, and I{covers} parameters may be
+ strings, in which case they will be converted to their proper
+ type.
+
+ The rdataset returned is not a copy; changes to it will change
+ the zone.
+
+ KeyError is raised if the name or type are not found.
+ Use L{get_rdataset} if you want to have None returned instead.
+
+ @param name: the owner name to look for
+ @type name: DNS.name.Name object or string
+ @param rdtype: the rdata type desired
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ @param create: should the node and rdataset be created if they do not
+ exist?
+ @type create: bool
+ @raises KeyError: the node or rdata could not be found
+ @rtype: dns.rdataset.Rdataset object
+ """
+
+ name = self._validate_name(name)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(covers, string_types):
+ covers = dns.rdatatype.from_text(covers)
+ node = self.find_node(name, create)
+ return node.find_rdataset(self.rdclass, rdtype, covers, create)
+
+ def get_rdataset(self, name, rdtype, covers=dns.rdatatype.NONE,
+ create=False):
+ """Look for rdata with the specified name and type in the zone,
+ and return an rdataset encapsulating it.
+
+ The I{name}, I{rdtype}, and I{covers} parameters may be
+ strings, in which case they will be converted to their proper
+ type.
+
+ The rdataset returned is not a copy; changes to it will change
+ the zone.
+
+ None is returned if the name or type are not found.
+ Use L{find_rdataset} if you want to have KeyError raised instead.
+
+ @param name: the owner name to look for
+ @type name: DNS.name.Name object or string
+ @param rdtype: the rdata type desired
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ @param create: should the node and rdataset be created if they do not
+ exist?
+ @type create: bool
+ @rtype: dns.rdataset.Rdataset object or None
+ """
+
+ try:
+ rdataset = self.find_rdataset(name, rdtype, covers, create)
+ except KeyError:
+ rdataset = None
+ return rdataset
+
+ def delete_rdataset(self, name, rdtype, covers=dns.rdatatype.NONE):
+ """Delete the rdataset matching I{rdtype} and I{covers}, if it
+ exists at the node specified by I{name}.
+
+ The I{name}, I{rdtype}, and I{covers} parameters may be
+ strings, in which case they will be converted to their proper
+ type.
+
+ It is not an error if the node does not exist, or if there is no
+ matching rdataset at the node.
+
+ If the node has no rdatasets after the deletion, it will itself
+ be deleted.
+
+ @param name: the owner name to look for
+ @type name: DNS.name.Name object or string
+ @param rdtype: the rdata type desired
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ """
+
+ name = self._validate_name(name)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(covers, string_types):
+ covers = dns.rdatatype.from_text(covers)
+ node = self.get_node(name)
+ if node is not None:
+ node.delete_rdataset(self.rdclass, rdtype, covers)
+ if len(node) == 0:
+ self.delete_node(name)
+
+ def replace_rdataset(self, name, replacement):
+ """Replace an rdataset at name.
+
+ It is not an error if there is no rdataset matching I{replacement}.
+
+ Ownership of the I{replacement} object is transferred to the zone;
+ in other words, this method does not store a copy of I{replacement}
+ at the node, it stores I{replacement} itself.
+
+ If the I{name} node does not exist, it is created.
+
+ @param name: the owner name
+ @type name: DNS.name.Name object or string
+ @param replacement: the replacement rdataset
+ @type replacement: dns.rdataset.Rdataset
+ """
+
+ if replacement.rdclass != self.rdclass:
+ raise ValueError('replacement.rdclass != zone.rdclass')
+ node = self.find_node(name, True)
+ node.replace_rdataset(replacement)
+
+ def find_rrset(self, name, rdtype, covers=dns.rdatatype.NONE):
+ """Look for rdata with the specified name and type in the zone,
+ and return an RRset encapsulating it.
+
+ The I{name}, I{rdtype}, and I{covers} parameters may be
+ strings, in which case they will be converted to their proper
+ type.
+
+ This method is less efficient than the similar
+ L{find_rdataset} because it creates an RRset instead of
+ returning the matching rdataset. It may be more convenient
+ for some uses since it returns an object which binds the owner
+ name to the rdata.
+
+ This method may not be used to create new nodes or rdatasets;
+ use L{find_rdataset} instead.
+
+ KeyError is raised if the name or type are not found.
+ Use L{get_rrset} if you want to have None returned instead.
+
+ @param name: the owner name to look for
+ @type name: DNS.name.Name object or string
+ @param rdtype: the rdata type desired
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ @raises KeyError: the node or rdata could not be found
+ @rtype: dns.rrset.RRset object
+ """
+
+ name = self._validate_name(name)
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(covers, string_types):
+ covers = dns.rdatatype.from_text(covers)
+ rdataset = self.nodes[name].find_rdataset(self.rdclass, rdtype, covers)
+ rrset = dns.rrset.RRset(name, self.rdclass, rdtype, covers)
+ rrset.update(rdataset)
+ return rrset
+
+ def get_rrset(self, name, rdtype, covers=dns.rdatatype.NONE):
+ """Look for rdata with the specified name and type in the zone,
+ and return an RRset encapsulating it.
+
+ The I{name}, I{rdtype}, and I{covers} parameters may be
+ strings, in which case they will be converted to their proper
+ type.
+
+ This method is less efficient than the similar L{get_rdataset}
+ because it creates an RRset instead of returning the matching
+ rdataset. It may be more convenient for some uses since it
+ returns an object which binds the owner name to the rdata.
+
+ This method may not be used to create new nodes or rdatasets;
+ use L{find_rdataset} instead.
+
+ None is returned if the name or type are not found.
+ Use L{find_rrset} if you want to have KeyError raised instead.
+
+ @param name: the owner name to look for
+ @type name: DNS.name.Name object or string
+ @param rdtype: the rdata type desired
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ @rtype: dns.rrset.RRset object
+ """
+
+ try:
+ rrset = self.find_rrset(name, rdtype, covers)
+ except KeyError:
+ rrset = None
+ return rrset
+
+ def iterate_rdatasets(self, rdtype=dns.rdatatype.ANY,
+ covers=dns.rdatatype.NONE):
+ """Return a generator which yields (name, rdataset) tuples for
+ all rdatasets in the zone which have the specified I{rdtype}
+ and I{covers}. If I{rdtype} is dns.rdatatype.ANY, the default,
+ then all rdatasets will be matched.
+
+ @param rdtype: int or string
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ """
+
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(covers, string_types):
+ covers = dns.rdatatype.from_text(covers)
+ for (name, node) in self.iteritems(): # pylint: disable=dict-iter-method
+ for rds in node:
+ if rdtype == dns.rdatatype.ANY or \
+ (rds.rdtype == rdtype and rds.covers == covers):
+ yield (name, rds)
+
+ def iterate_rdatas(self, rdtype=dns.rdatatype.ANY,
+ covers=dns.rdatatype.NONE):
+ """Return a generator which yields (name, ttl, rdata) tuples for
+ all rdatas in the zone which have the specified I{rdtype}
+ and I{covers}. If I{rdtype} is dns.rdatatype.ANY, the default,
+ then all rdatas will be matched.
+
+ @param rdtype: int or string
+ @type rdtype: int or string
+ @param covers: the covered type (defaults to None)
+ @type covers: int or string
+ """
+
+ if isinstance(rdtype, string_types):
+ rdtype = dns.rdatatype.from_text(rdtype)
+ if isinstance(covers, string_types):
+ covers = dns.rdatatype.from_text(covers)
+ for (name, node) in self.iteritems(): # pylint: disable=dict-iter-method
+ for rds in node:
+ if rdtype == dns.rdatatype.ANY or \
+ (rds.rdtype == rdtype and rds.covers == covers):
+ for rdata in rds:
+ yield (name, rds.ttl, rdata)
+
+ def to_file(self, f, sorted=True, relativize=True, nl=None):
+ """Write a zone to a file.
+
+ @param f: file or string. If I{f} is a string, it is treated
+ as the name of a file to open.
+ @param sorted: if True, the file will be written with the
+ names sorted in DNSSEC order from least to greatest. Otherwise
+ the names will be written in whatever order they happen to have
+ in the zone's dictionary.
+ @param relativize: if True, domain names in the output will be
+ relativized to the zone's origin (if possible).
+ @type relativize: bool
+ @param nl: The end of line string. If not specified, the
+ output will use the platform's native end-of-line marker (i.e.
+ LF on POSIX, CRLF on Windows, CR on Macintosh).
+ @type nl: string or None
+ """
+
+ if isinstance(f, string_types):
+ f = open(f, 'wb')
+ want_close = True
+ else:
+ want_close = False
+
+ # f.encoding may be None, or the attribute may not exist at all,
+ # so use getattr() with a default instead of reading it directly.
+ file_enc = getattr(f, 'encoding', None)
+ if file_enc is None:
+ file_enc = 'utf-8'
+
+ if nl is None:
+ nl_b = os.linesep.encode(file_enc) # binary mode, '\n' is not enough
+ nl = u'\n'
+ elif isinstance(nl, string_types):
+ nl_b = nl.encode(file_enc)
+ else:
+ nl_b = nl
+ nl = nl.decode()
+
+ try:
+ if sorted:
+ names = list(self.keys())
+ names.sort()
+ else:
+ names = self.iterkeys() # pylint: disable=dict-iter-method
+ for n in names:
+ l = self[n].to_text(n, origin=self.origin,
+ relativize=relativize)
+ if isinstance(l, text_type):
+ l_b = l.encode(file_enc)
+ else:
+ l_b = l
+ l = l.decode()
+
+ try:
+ f.write(l_b)
+ f.write(nl_b)
+ except TypeError: # textual mode
+ f.write(l)
+ f.write(nl)
+ finally:
+ if want_close:
+ f.close()
+
+ def to_text(self, sorted=True, relativize=True, nl=None):
+ """Return a zone's text as though it were written to a file.
+
+ @param sorted: if True, the file will be written with the
+ names sorted in DNSSEC order from least to greatest. Otherwise
+ the names will be written in whatever order they happen to have
+ in the zone's dictionary.
+ @param relativize: if True, domain names in the output will be
+ relativized to the zone's origin (if possible).
+ @type relativize: bool
+ @param nl: The end of line string. If not specified, the
+ output will use the platform's native end-of-line marker (i.e.
+ LF on POSIX, CRLF on Windows, CR on Macintosh).
+ @type nl: string or None
+ """
+ temp_buffer = BytesIO()
+ self.to_file(temp_buffer, sorted, relativize, nl)
+ return_value = temp_buffer.getvalue()
+ temp_buffer.close()
+ return return_value
+
+ def check_origin(self):
+ """Do some simple checking of the zone's origin.
+
+ @raises dns.zone.NoSOA: there is no SOA RR
+ @raises dns.zone.NoNS: there is no NS RRset
+ @raises KeyError: there is no origin node
+ """
+ if self.relativize:
+ name = dns.name.empty
+ else:
+ name = self.origin
+ if self.get_rdataset(name, dns.rdatatype.SOA) is None:
+ raise NoSOA
+ if self.get_rdataset(name, dns.rdatatype.NS) is None:
+ raise NoNS
+
+
+class _MasterReader(object):
+
+ """Read a DNS master file.
+
+ @ivar tok: The tokenizer
+ @type tok: dns.tokenizer.Tokenizer object
+ @ivar last_ttl: The last seen explicit TTL for an RR
+ @type last_ttl: int
+ @ivar last_ttl_known: Has last TTL been detected
+ @type last_ttl_known: bool
+ @ivar default_ttl: The default TTL from a $TTL directive or SOA RR
+ @type default_ttl: int
+ @ivar default_ttl_known: Has default TTL been detected
+ @type default_ttl_known: bool
+ @ivar last_name: The last name read
+ @type last_name: dns.name.Name object
+ @ivar current_origin: The current origin
+ @type current_origin: dns.name.Name object
+ @ivar relativize: should names in the zone be relativized?
+ @type relativize: bool
+ @ivar zone: the zone
+ @type zone: dns.zone.Zone object
+ @ivar saved_state: saved reader state (used when processing $INCLUDE)
+ @type saved_state: list of (tokenizer, current_origin, last_name, file,
+ last_ttl, last_ttl_known, default_ttl, default_ttl_known) tuples.
+ @ivar current_file: the file object of the $INCLUDed file being parsed
+ (None if no $INCLUDE is active).
+ @ivar allow_include: is $INCLUDE allowed?
+ @type allow_include: bool
+ @ivar check_origin: should sanity checks of the origin node be done?
+ The default is True.
+ @type check_origin: bool
+ """
+
+ def __init__(self, tok, origin, rdclass, relativize, zone_factory=Zone,
+ allow_include=False, check_origin=True):
+ if isinstance(origin, string_types):
+ origin = dns.name.from_text(origin)
+ self.tok = tok
+ self.current_origin = origin
+ self.relativize = relativize
+ self.last_ttl = 0
+ self.last_ttl_known = False
+ self.default_ttl = 0
+ self.default_ttl_known = False
+ self.last_name = self.current_origin
+ self.zone = zone_factory(origin, rdclass, relativize=relativize)
+ self.saved_state = []
+ self.current_file = None
+ self.allow_include = allow_include
+ self.check_origin = check_origin
+
+ def _eat_line(self):
+ while True:
+ token = self.tok.get()
+ if token.is_eol_or_eof():
+ break
+
+ def _rr_line(self):
+ """Process one line from a DNS master file."""
+ # Name
+ if self.current_origin is None:
+ raise UnknownOrigin
+ token = self.tok.get(want_leading=True)
+ if not token.is_whitespace():
+ self.last_name = dns.name.from_text(
+ token.value, self.current_origin)
+ else:
+ token = self.tok.get()
+ if token.is_eol_or_eof():
+ # treat leading WS followed by EOL/EOF as if they were EOL/EOF.
+ return
+ self.tok.unget(token)
+ name = self.last_name
+ if not name.is_subdomain(self.zone.origin):
+ self._eat_line()
+ return
+ if self.relativize:
+ name = name.relativize(self.zone.origin)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ # TTL
+ try:
+ ttl = dns.ttl.from_text(token.value)
+ self.last_ttl = ttl
+ self.last_ttl_known = True
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.ttl.BadTTL:
+ if not (self.last_ttl_known or self.default_ttl_known):
+ raise dns.exception.SyntaxError("Missing default TTL value")
+ if self.default_ttl_known:
+ ttl = self.default_ttl
+ else:
+ ttl = self.last_ttl
+ # Class
+ try:
+ rdclass = dns.rdataclass.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.exception.SyntaxError:
+ raise dns.exception.SyntaxError
+ except Exception:
+ rdclass = self.zone.rdclass
+ if rdclass != self.zone.rdclass:
+ raise dns.exception.SyntaxError("RR class is not zone's class")
+ # Type
+ try:
+ rdtype = dns.rdatatype.from_text(token.value)
+ except Exception:
+ raise dns.exception.SyntaxError(
+ "unknown rdatatype '%s'" % token.value)
+ n = self.zone.nodes.get(name)
+ if n is None:
+ n = self.zone.node_factory()
+ self.zone.nodes[name] = n
+ try:
+ rd = dns.rdata.from_text(rdclass, rdtype, self.tok,
+ self.current_origin, False)
+ except dns.exception.SyntaxError:
+ # Catch and reraise.
+ (ty, va) = sys.exc_info()[:2]
+ raise va
+ except Exception:
+ # All exceptions that occur in the processing of rdata
+ # are treated as syntax errors. This is not strictly
+ # correct, but it is correct almost all of the time.
+ # We convert them to syntax errors so that we can emit
+ # helpful filename:line info.
+ (ty, va) = sys.exc_info()[:2]
+ raise dns.exception.SyntaxError(
+ "caught exception {}: {}".format(str(ty), str(va)))
+
+ if not self.default_ttl_known and isinstance(rd, dns.rdtypes.ANY.SOA.SOA):
+ # The pre-RFC2308 and pre-BIND9 behavior inherits the zone default
+ # TTL from the SOA minttl if no $TTL statement is present before the
+ # SOA is parsed.
+ self.default_ttl = rd.minimum
+ self.default_ttl_known = True
+
+ rd.choose_relativity(self.zone.origin, self.relativize)
+ covers = rd.covers()
+ rds = n.find_rdataset(rdclass, rdtype, covers, True)
+ rds.add(rd, ttl)
+
+ def _parse_modify(self, side):
+ # Capture the entire '{...}' modifier in a group so that the whole
+ # '$modifier' token can be substituted later.
+ is_generate1 = re.compile(r"^.*\$({(\+|-?)(\d+),(\d+),(.)}).*$")
+ is_generate2 = re.compile(r"^.*\$({(\+|-?)(\d+)}).*$")
+ is_generate3 = re.compile(r"^.*\$({(\+|-?)(\d+),(\d+)}).*$")
+ # Sometimes there are modifiers in the hostname. These come after
+ # the dollar sign. They are in the form: ${offset[,width[,base]]}.
+ # Make names
+ g1 = is_generate1.match(side)
+ if g1:
+ mod, sign, offset, width, base = g1.groups()
+ if sign == '':
+ sign = '+'
+ g2 = is_generate2.match(side)
+ if g2:
+ mod, sign, offset = g2.groups()
+ if sign == '':
+ sign = '+'
+ width = 0
+ base = 'd'
+ g3 = is_generate3.match(side)
+ if g3:
+ mod, sign, offset, width = g3.groups()
+ if sign == '':
+ sign = '+'
+ base = 'd'
+
+ if not (g1 or g2 or g3):
+ mod = ''
+ sign = '+'
+ offset = 0
+ width = 0
+ base = 'd'
+
+ if base != 'd':
+ raise NotImplementedError()
+
+ return mod, sign, offset, width, base
+
+ def _generate_line(self):
+ # range lhs [ttl] [class] type rhs [ comment ]
+ """Process one line containing the GENERATE statement from a DNS
+ master file."""
+ if self.current_origin is None:
+ raise UnknownOrigin
+
+ token = self.tok.get()
+ # Range (required)
+ try:
+ start, stop, step = dns.grange.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except Exception:
+ raise dns.exception.SyntaxError
+
+ # lhs (required)
+ try:
+ lhs = token.value
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except Exception:
+ raise dns.exception.SyntaxError
+
+ # TTL
+ try:
+ ttl = dns.ttl.from_text(token.value)
+ self.last_ttl = ttl
+ self.last_ttl_known = True
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.ttl.BadTTL:
+ if not (self.last_ttl_known or self.default_ttl_known):
+ raise dns.exception.SyntaxError("Missing default TTL value")
+ if self.default_ttl_known:
+ ttl = self.default_ttl
+ else:
+ ttl = self.last_ttl
+ # Class
+ try:
+ rdclass = dns.rdataclass.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except dns.exception.SyntaxError:
+ raise dns.exception.SyntaxError
+ except Exception:
+ rdclass = self.zone.rdclass
+ if rdclass != self.zone.rdclass:
+ raise dns.exception.SyntaxError("RR class is not zone's class")
+ # Type
+ try:
+ rdtype = dns.rdatatype.from_text(token.value)
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError
+ except Exception:
+ raise dns.exception.SyntaxError("unknown rdatatype '%s'" %
+ token.value)
+
+ # rhs (required)
+ try:
+ rhs = token.value
+ except Exception:
+ raise dns.exception.SyntaxError
+
+ lmod, lsign, loffset, lwidth, lbase = self._parse_modify(lhs)
+ rmod, rsign, roffset, rwidth, rbase = self._parse_modify(rhs)
+ for i in range(start, stop + 1, step):
+ # stop + 1 because BIND's $GENERATE range is inclusive while
+ # Python's range() is exclusive.
+
+ if lsign == u'+':
+ lindex = i + int(loffset)
+ elif lsign == u'-':
+ lindex = i - int(loffset)
+
+ if rsign == u'-':
+ rindex = i - int(roffset)
+ elif rsign == u'+':
+ rindex = i + int(roffset)
+
+ lzfindex = str(lindex).zfill(int(lwidth))
+ rzfindex = str(rindex).zfill(int(rwidth))
+
+ name = lhs.replace(u'$%s' % (lmod), lzfindex)
+ rdata = rhs.replace(u'$%s' % (rmod), rzfindex)
+
+ self.last_name = dns.name.from_text(name, self.current_origin)
+ name = self.last_name
+ if not name.is_subdomain(self.zone.origin):
+ self._eat_line()
+ return
+ if self.relativize:
+ name = name.relativize(self.zone.origin)
+
+ n = self.zone.nodes.get(name)
+ if n is None:
+ n = self.zone.node_factory()
+ self.zone.nodes[name] = n
+ try:
+ rd = dns.rdata.from_text(rdclass, rdtype, rdata,
+ self.current_origin, False)
+ except dns.exception.SyntaxError:
+ # Catch and reraise.
+ (ty, va) = sys.exc_info()[:2]
+ raise va
+ except Exception:
+ # All exceptions that occur in the processing of rdata
+ # are treated as syntax errors. This is not strictly
+ # correct, but it is correct almost all of the time.
+ # We convert them to syntax errors so that we can emit
+ # helpful filename:line info.
+ (ty, va) = sys.exc_info()[:2]
+ raise dns.exception.SyntaxError("caught exception %s: %s" %
+ (str(ty), str(va)))
+
+ rd.choose_relativity(self.zone.origin, self.relativize)
+ covers = rd.covers()
+ rds = n.find_rdataset(rdclass, rdtype, covers, True)
+ rds.add(rd, ttl)
+
+ def read(self):
+ """Read a DNS master file and build a zone object.
+
+ @raises dns.zone.NoSOA: No SOA RR was found at the zone origin
+ @raises dns.zone.NoNS: No NS RRset was found at the zone origin
+ """
+
+ try:
+ while True:
+ token = self.tok.get(True, True)
+ if token.is_eof():
+ if self.current_file is not None:
+ self.current_file.close()
+ if len(self.saved_state) > 0:
+ (self.tok,
+ self.current_origin,
+ self.last_name,
+ self.current_file,
+ self.last_ttl,
+ self.last_ttl_known,
+ self.default_ttl,
+ self.default_ttl_known) = self.saved_state.pop(-1)
+ continue
+ break
+ elif token.is_eol():
+ continue
+ elif token.is_comment():
+ self.tok.get_eol()
+ continue
+ elif token.value[0] == u'$':
+ c = token.value.upper()
+ if c == u'$TTL':
+ token = self.tok.get()
+ if not token.is_identifier():
+ raise dns.exception.SyntaxError("bad $TTL")
+ self.default_ttl = dns.ttl.from_text(token.value)
+ self.default_ttl_known = True
+ self.tok.get_eol()
+ elif c == u'$ORIGIN':
+ self.current_origin = self.tok.get_name()
+ self.tok.get_eol()
+ if self.zone.origin is None:
+ self.zone.origin = self.current_origin
+ elif c == u'$INCLUDE' and self.allow_include:
+ token = self.tok.get()
+ filename = token.value
+ token = self.tok.get()
+ if token.is_identifier():
+ new_origin =\
+ dns.name.from_text(token.value,
+ self.current_origin)
+ self.tok.get_eol()
+ elif not token.is_eol_or_eof():
+ raise dns.exception.SyntaxError(
+ "bad origin in $INCLUDE")
+ else:
+ new_origin = self.current_origin
+ self.saved_state.append((self.tok,
+ self.current_origin,
+ self.last_name,
+ self.current_file,
+ self.last_ttl,
+ self.last_ttl_known,
+ self.default_ttl,
+ self.default_ttl_known))
+ self.current_file = open(filename, 'r')
+ self.tok = dns.tokenizer.Tokenizer(self.current_file,
+ filename)
+ self.current_origin = new_origin
+ elif c == u'$GENERATE':
+ self._generate_line()
+ else:
+ raise dns.exception.SyntaxError(
+ "Unknown master file directive '" + c + "'")
+ continue
+ self.tok.unget(token)
+ self._rr_line()
+ except dns.exception.SyntaxError as detail:
+ (filename, line_number) = self.tok.where()
+ if detail is None:
+ detail = "syntax error"
+ raise dns.exception.SyntaxError(
+ "%s:%d: %s" % (filename, line_number, detail))
+
+ # Now that we're done reading, do some basic checking of the zone.
+ if self.check_origin:
+ self.zone.check_origin()
+
+
+def from_text(text, origin=None, rdclass=dns.rdataclass.IN,
+ relativize=True, zone_factory=Zone, filename=None,
+ allow_include=False, check_origin=True):
+ """Build a zone object from a master file format string.
+
+ @param text: the master file format input
+ @type text: string.
+ @param origin: The origin of the zone; if not specified, the first
+ $ORIGIN statement in the master file will determine the origin of the
+ zone.
+ @type origin: dns.name.Name object or string
+ @param rdclass: The zone's rdata class; the default is class IN.
+ @type rdclass: int
+ @param relativize: should names be relativized? The default is True
+ @type relativize: bool
+ @param zone_factory: The zone factory to use
+ @type zone_factory: function returning a Zone
+ @param filename: The filename to emit when describing where an error
+ occurred; the default is ''.
+ @type filename: string
+ @param allow_include: is $INCLUDE allowed?
+ @type allow_include: bool
+ @param check_origin: should sanity checks of the origin node be done?
+ The default is True.
+ @type check_origin: bool
+ @raises dns.zone.NoSOA: No SOA RR was found at the zone origin
+ @raises dns.zone.NoNS: No NS RRset was found at the zone origin
+ @rtype: dns.zone.Zone object
+ """
+
+ # 'text' can also be a file, but we don't publish that fact
+ # since it's an implementation detail. The official file
+ # interface is from_file().
+
+ if filename is None:
+ filename = ''
+ tok = dns.tokenizer.Tokenizer(text, filename)
+ reader = _MasterReader(tok, origin, rdclass, relativize, zone_factory,
+ allow_include=allow_include,
+ check_origin=check_origin)
+ reader.read()
+ return reader.zone
+
+
+def from_file(f, origin=None, rdclass=dns.rdataclass.IN,
+ relativize=True, zone_factory=Zone, filename=None,
+ allow_include=True, check_origin=True):
+ """Read a master file and build a zone object.
+
+ @param f: file or string. If I{f} is a string, it is treated
+ as the name of a file to open.
+ @param origin: The origin of the zone; if not specified, the first
+ $ORIGIN statement in the master file will determine the origin of the
+ zone.
+ @type origin: dns.name.Name object or string
+ @param rdclass: The zone's rdata class; the default is class IN.
+ @type rdclass: int
+ @param relativize: should names be relativized? The default is True
+ @type relativize: bool
+ @param zone_factory: The zone factory to use
+ @type zone_factory: function returning a Zone
+ @param filename: The filename to emit when describing where an error
+ occurred; the default is '', or the value of I{f} if I{f} is a
+ string.
+ @type filename: string
+ @param allow_include: is $INCLUDE allowed?
+ @type allow_include: bool
+ @param check_origin: should sanity checks of the origin node be done?
+ The default is True.
+ @type check_origin: bool
+ @raises dns.zone.NoSOA: No SOA RR was found at the zone origin
+ @raises dns.zone.NoNS: No NS RRset was found at the zone origin
+ @rtype: dns.zone.Zone object
+ """
+
+ str_type = string_types
+ if PY3:
+ opts = 'r'
+ else:
+ opts = 'rU'
+
+ if isinstance(f, str_type):
+ if filename is None:
+ filename = f
+ f = open(f, opts)
+ want_close = True
+ else:
+ if filename is None:
+ filename = ''
+ want_close = False
+
+ try:
+ z = from_text(f, origin, rdclass, relativize, zone_factory,
+ filename, allow_include, check_origin)
+ finally:
+ if want_close:
+ f.close()
+ return z
+
+
+def from_xfr(xfr, zone_factory=Zone, relativize=True, check_origin=True):
+ """Convert the output of a zone transfer generator into a zone object.
+
+ @param xfr: The xfr generator
+ @type xfr: generator of dns.message.Message objects
+ @param relativize: should names be relativized? The default is True.
+ It is essential that the relativize setting matches the one specified
+ to dns.query.xfr().
+ @type relativize: bool
+ @param check_origin: should sanity checks of the origin node be done?
+ The default is True.
+ @type check_origin: bool
+ @raises dns.zone.NoSOA: No SOA RR was found at the zone origin
+ @raises dns.zone.NoNS: No NS RRset was found at the zone origin
+ @rtype: dns.zone.Zone object
+ """
+
+ z = None
+ for r in xfr:
+ if z is None:
+ if relativize:
+ origin = r.origin
+ else:
+ origin = r.answer[0].name
+ rdclass = r.answer[0].rdclass
+ z = zone_factory(origin, rdclass, relativize=relativize)
+ for rrset in r.answer:
+ znode = z.nodes.get(rrset.name)
+ if not znode:
+ znode = z.node_factory()
+ z.nodes[rrset.name] = znode
+ zrds = znode.find_rdataset(rrset.rdclass, rrset.rdtype,
+ rrset.covers, True)
+ zrds.update_ttl(rrset.ttl)
+ for rd in rrset:
+ rd.choose_relativity(z.origin, relativize)
+ zrds.add(rd)
+ if check_origin:
+ z.check_origin()
+ return z
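The `${offset[,width[,base]]}` modifier handling in `_parse_modify` and `_generate_line` above can be illustrated with a small standalone sketch using only the standard library. The regex and the `expand` helper below are simplified, hypothetical variants of the vendored code (one combined pattern instead of three, decimal base only), not the module's actual API:

```python
import re

# Simplified stand-in for the _parse_modify patterns: a $-modifier of the
# form ${offset[,width[,base]]}; width and base are optional.
_MOD = re.compile(r"\$({(\+|-?)(\d+)(?:,(\d+)(?:,(.))?)?})")

def expand(side, i):
    """Substitute the $-modifier in one side of a $GENERATE line."""
    m = _MOD.search(side)
    if m is None:
        # A bare '$' with no modifier is replaced by the counter itself.
        return side.replace("$", str(i))
    mod, sign, offset, width, base = m.groups()
    # The offset is added to (or subtracted from) the loop counter.
    index = i - int(offset) if sign == "-" else i + int(offset)
    if base not in (None, "d"):
        raise NotImplementedError("only decimal base is handled here")
    # zfill pads the index to the requested width, like the real reader.
    return side.replace("$" + mod, str(index).zfill(int(width or 0)))

for i in range(1, 4):
    print(expand("host${0,3,d}", i))  # host001, host002, host003
```

With a template such as `$GENERATE 1-3 host${0,3,d} A 10.0.0.$`, the left side expands to `host001`..`host003` while the bare `$` on the right becomes the counter, which mirrors how `_generate_line` builds one name/rdata pair per step of the range.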
diff --git a/poetry.lock b/poetry.lock
index 767aeee500..41a1f636ec 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -80,7 +80,7 @@ python-dateutil = ">=2.7.0"
[[package]]
name = "astroid"
-version = "2.5.2"
+version = "2.5.3"
description = "An abstract syntax tree for Python with inference support."
category = "dev"
optional = false
@@ -272,15 +272,24 @@ test = ["pytest (>=6.0)", "pytest-cov", "pytest-subtests", "pytest-xdist", "pret
[[package]]
name = "cx-freeze"
-version = "6.5.3"
+version = "6.6"
description = "Create standalone executables from Python scripts"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
+cx-Logging = {version = ">=3.0", markers = "sys_platform == \"win32\""}
importlib-metadata = ">=3.1.1"
+[[package]]
+name = "cx-logging"
+version = "3.0"
+description = "Python and C interfaces for logging"
+category = "dev"
+optional = false
+python-versions = "*"
+
[[package]]
name = "dnspython"
version = "2.1.0"
@@ -298,12 +307,24 @@ trio = ["trio (>=0.14.0)", "sniffio (>=1.1)"]
[[package]]
name = "docutils"
-version = "0.17"
+version = "0.16"
description = "Docutils -- Python Documentation Utilities"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
+[[package]]
+name = "enlighten"
+version = "1.9.0"
+description = "Enlighten Progress Bar"
+category = "main"
+optional = false
+python-versions = "*"
+
+[package.dependencies]
+blessed = ">=1.17.7"
+prefixed = ">=0.3.2"
+
[[package]]
name = "evdev"
version = "1.4.0"
@@ -314,7 +335,7 @@ python-versions = "*"
[[package]]
name = "flake8"
-version = "3.9.0"
+version = "3.9.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
@@ -392,7 +413,7 @@ uritemplate = ">=3.0.0,<4dev"
[[package]]
name = "google-auth"
-version = "1.28.0"
+version = "1.29.0"
description = "Google Authentication Library"
category = "main"
optional = false
@@ -407,6 +428,7 @@ six = ">=1.9.0"
[package.extras]
aiohttp = ["aiohttp (>=3.6.2,<4.0.0dev)"]
pyopenssl = ["pyopenssl (>=20.0.0)"]
+reauth = ["pyu2f (>=0.1.5)"]
[[package]]
name = "google-auth-httplib2"
@@ -464,7 +486,7 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "importlib-metadata"
-version = "3.10.0"
+version = "4.0.0"
description = "Read metadata from Python packages"
category = "main"
optional = false
@@ -540,7 +562,7 @@ i18n = ["Babel (>=0.8)"]
[[package]]
name = "jinxed"
-version = "1.0.1"
+version = "1.1.0"
description = "Jinxed Terminal Library"
category = "main"
optional = false
@@ -704,9 +726,17 @@ importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
[package.extras]
dev = ["pre-commit", "tox"]
+[[package]]
+name = "prefixed"
+version = "0.3.2"
+description = "Prefixed alternative numeric library"
+category = "main"
+optional = false
+python-versions = "*"
+
[[package]]
name = "protobuf"
-version = "3.15.7"
+version = "3.15.8"
description = "Protocol Buffers"
category = "main"
optional = false
@@ -1120,7 +1150,7 @@ python-versions = "*"
[[package]]
name = "sphinx"
-version = "3.5.3"
+version = "3.5.4"
description = "Python documentation generator"
category = "dev"
optional = false
@@ -1130,7 +1160,7 @@ python-versions = ">=3.5"
alabaster = ">=0.7,<0.8"
babel = ">=1.3"
colorama = {version = ">=0.3.5", markers = "sys_platform == \"win32\""}
-docutils = ">=0.12"
+docutils = ">=0.12,<0.17"
imagesize = "*"
Jinja2 = ">=2.3"
packaging = "*"
@@ -1163,13 +1193,14 @@ sphinx = "*"
[[package]]
name = "sphinx-rtd-theme"
-version = "0.5.1"
+version = "0.5.2"
description = "Read the Docs theme for Sphinx"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
+docutils = "<0.17"
sphinx = "*"
[package.extras]
@@ -1277,22 +1308,9 @@ category = "dev"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
-[[package]]
-name = "tqdm"
-version = "4.60.0"
-description = "Fast, Extensible Progress Meter"
-category = "dev"
-optional = false
-python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
-
-[package.extras]
-dev = ["py-make (>=0.1.0)", "twine", "wheel"]
-notebook = ["ipywidgets (>=6)"]
-telegram = ["requests"]
-
[[package]]
name = "typed-ast"
-version = "1.4.2"
+version = "1.4.3"
description = "a fork of Python 2 and 3 ast modules with type comment support"
category = "dev"
optional = false
@@ -1399,7 +1417,7 @@ testing = ["pytest (>=4.6)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pyt
[metadata]
lock-version = "1.1"
python-versions = "3.7.*"
-content-hash = "a8c9915ce3096b74b9328a632911a759780844d368fa1d6d0fbd7c5d7d4536cf"
+content-hash = "80fde42aade7fc90bb68d85f0d9b3feb27fc3744d72eb5af6a11b6c9d9836aca"
[metadata.files]
acre = []
@@ -1463,8 +1481,8 @@ arrow = [
{file = "arrow-0.17.0.tar.gz", hash = "sha256:ff08d10cda1d36c68657d6ad20d74fbea493d980f8b2d45344e00d6ed2bf6ed4"},
]
astroid = [
- {file = "astroid-2.5.2-py3-none-any.whl", hash = "sha256:cd80bf957c49765dce6d92c43163ff9d2abc43132ce64d4b1b47717c6d2522df"},
- {file = "astroid-2.5.2.tar.gz", hash = "sha256:6b0ed1af831570e500e2437625979eaa3b36011f66ddfc4ce930128610258ca9"},
+ {file = "astroid-2.5.3-py3-none-any.whl", hash = "sha256:bea3f32799fbb8581f58431c12591bc20ce11cbc90ad82e2ea5717d94f2080d5"},
+ {file = "astroid-2.5.3.tar.gz", hash = "sha256:ad63b8552c70939568966811a088ef0bc880f99a24a00834abd0e3681b514f91"},
]
async-timeout = [
{file = "async-timeout-3.0.1.tar.gz", hash = "sha256:0c3c816a028d47f659d6ff5c745cb2acf1f966da1fe5c19c77a70282b25f4c5f"},
@@ -1630,30 +1648,49 @@ cryptography = [
{file = "cryptography-3.4.7.tar.gz", hash = "sha256:3d10de8116d25649631977cb37da6cbdd2d6fa0e0281d014a5b7d337255ca713"},
]
cx-freeze = [
- {file = "cx_Freeze-6.5.3-cp36-cp36m-win32.whl", hash = "sha256:0a1babae574546b622303da53e1a9829aa3a7e53e62b41eb260250220f83164b"},
- {file = "cx_Freeze-6.5.3-cp36-cp36m-win_amd64.whl", hash = "sha256:2671e46cd491c181c632df3f0df2847bad7066897faa07eb1d50f60f5082596f"},
- {file = "cx_Freeze-6.5.3-cp37-cp37m-win32.whl", hash = "sha256:abf5f95f914573cdff5bd9845144977b875fc655417d0e66f935865af1de64d5"},
- {file = "cx_Freeze-6.5.3-cp37-cp37m-win_amd64.whl", hash = "sha256:65c4560bc7b18e2a7bbe3546313cbc01d3fca244d199b39508cfa2ae561887ce"},
- {file = "cx_Freeze-6.5.3-cp38-cp38-win32.whl", hash = "sha256:7e2592fe1b65bd45c729934b391579fde5aed6b4c9e3e4d990738fc7fec718ea"},
- {file = "cx_Freeze-6.5.3-cp38-cp38-win_amd64.whl", hash = "sha256:d3bb71349dace28e545eb1e4549255f0dd915f925f8505b1a342b3d2fbd4734b"},
- {file = "cx_Freeze-6.5.3-cp39-cp39-win32.whl", hash = "sha256:df3872d8e8f87a3f89e6758bed130b5b95ee7473054e2a7eee5b1a8d1c4ecf9e"},
- {file = "cx_Freeze-6.5.3-cp39-cp39-win_amd64.whl", hash = "sha256:507bbaace2fd27edb0e6b024898ab2e4831d45d7238264f578a5e4fa70f065e5"},
- {file = "cx_Freeze-6.5.3.tar.gz", hash = "sha256:e0d03cabcdf9b9c21354807ed9f06fa9481a8fd5a0838968a830f01a70820ff1"},
+ {file = "cx_Freeze-6.6-cp36-cp36m-win32.whl", hash = "sha256:b3d3a6bcd1a07c50b4e1c907f14842642156110e63a99cd5c73b8a24751e9b97"},
+ {file = "cx_Freeze-6.6-cp36-cp36m-win_amd64.whl", hash = "sha256:1935266ec644ea4f7e584985f44cefc0622a449a09980d990833a1a2afcadac8"},
+ {file = "cx_Freeze-6.6-cp37-cp37m-win32.whl", hash = "sha256:1eac2b0f254319cc641ce25bd83337effd7936092562fde701f3ffb40e0274ec"},
+ {file = "cx_Freeze-6.6-cp37-cp37m-win_amd64.whl", hash = "sha256:2bc46ef6d510811b6002f34a3ae4cbfdea44e18644febd2a404d3ee8e48a9fc4"},
+ {file = "cx_Freeze-6.6-cp38-cp38-win32.whl", hash = "sha256:46eb50ebc46f7ae236d16c6a52671ab0f7bb479bea668da19f4b6de3cc413e9e"},
+ {file = "cx_Freeze-6.6-cp38-cp38-win_amd64.whl", hash = "sha256:8c3b00476ce385bb58595bffce55aed031e5a6e16ab6e14d8bee9d1d569e46c3"},
+ {file = "cx_Freeze-6.6-cp39-cp39-win32.whl", hash = "sha256:6e9340cbcf52d4836980ecc83ddba4f7704ff6654dd41168c146b74f512977ce"},
+ {file = "cx_Freeze-6.6-cp39-cp39-win_amd64.whl", hash = "sha256:2fcf1c8b77ae5c06f45be3a9aff79e1dd808c0d624e97561f840dec5ea9b214a"},
+ {file = "cx_Freeze-6.6.tar.gz", hash = "sha256:c4af8ad3f7e7d71e291c1dec5d0fb26bbe92df834b098ed35434c901fbd6762f"},
+]
+cx-logging = [
+ {file = "cx_Logging-3.0-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:9fcd297e5c51470521c47eff0f86ba844aeca6be97e13c3e2114ebdf03fa3c96"},
+ {file = "cx_Logging-3.0-cp36-cp36m-win32.whl", hash = "sha256:0df4be47c5022cc54316949e283403214568ef599817ced0c0972183d6d4fabb"},
+ {file = "cx_Logging-3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:203ca92ee7c15d5dfe1fcdfcef7b39d0123eba5c6d8c2388b6e7db6b961a5362"},
+ {file = "cx_Logging-3.0-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:20daa71b2a30f61d09bcf55dbda002c10f0c7c691f53cb393fc6485410fa2484"},
+ {file = "cx_Logging-3.0-cp37-cp37m-win32.whl", hash = "sha256:5be5f905e8d34a3326e28d428674cdc2d57912fdf6e25b8676d63f76294eb4e0"},
+ {file = "cx_Logging-3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:04e4b61e2636dc8ae135937655af6626362aefc7f6175e86888a244b61001823"},
+ {file = "cx_Logging-3.0-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:1bf0ebc79a7baa331c7deaf57088c234b82710286dfad453ff0c55eee0122b72"},
+ {file = "cx_Logging-3.0-cp38-cp38-win32.whl", hash = "sha256:d98a59a47e99fa430b3f6d2a979e27509852d2c43e204f43bd0168e7ec97f469"},
+ {file = "cx_Logging-3.0-cp38-cp38-win_amd64.whl", hash = "sha256:bb2e91019e5905415f795eef994de60ace5ae186fc4fe3d358e2d8feebb24992"},
+ {file = "cx_Logging-3.0-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:b6f4a9b750e02a180517f779d174a1c7db651981cd37e5623235b87da9774dfd"},
+ {file = "cx_Logging-3.0-cp39-cp39-win32.whl", hash = "sha256:e7cca28e8ee4082654b6062cc4d06f83d48f1a7e2d152bab020c9e3e373afb90"},
+ {file = "cx_Logging-3.0-cp39-cp39-win_amd64.whl", hash = "sha256:302e9c4f65a936c288a4fa59a90e7e142d9ef994aa29676731acafdcccdbb3f5"},
+ {file = "cx_Logging-3.0.tar.gz", hash = "sha256:ba8a7465facf7b98d8f494030fb481a2e8aeee29dc191e10383bb54ed42bdb34"},
]
dnspython = [
{file = "dnspython-2.1.0-py3-none-any.whl", hash = "sha256:95d12f6ef0317118d2a1a6fc49aac65ffec7eb8087474158f42f26a639135216"},
{file = "dnspython-2.1.0.zip", hash = "sha256:e4a87f0b573201a0f3727fa18a516b055fd1107e0e5477cded4a2de497df1dd4"},
]
docutils = [
- {file = "docutils-0.17-py2.py3-none-any.whl", hash = "sha256:a71042bb7207c03d5647f280427f14bfbd1a65c9eb84f4b341d85fafb6bb4bdf"},
- {file = "docutils-0.17.tar.gz", hash = "sha256:e2ffeea817964356ba4470efba7c2f42b6b0de0b04e66378507e3e2504bbff4c"},
+ {file = "docutils-0.16-py2.py3-none-any.whl", hash = "sha256:0c5b78adfbf7762415433f5515cd5c9e762339e23369dbe8000d84a4bf4ab3af"},
+ {file = "docutils-0.16.tar.gz", hash = "sha256:c2de3a60e9e7d07be26b7f2b00ca0309c207e06c100f9cc2a94931fc75a478fc"},
+]
+enlighten = [
+ {file = "enlighten-1.9.0-py2.py3-none-any.whl", hash = "sha256:5c59e41505702243c6b26437403e371d2a146ac72de5f706376f738ea8f32659"},
+ {file = "enlighten-1.9.0.tar.gz", hash = "sha256:539cc308ccc0c3bfb50feb1b2da94c1a1ac21e80fe95e984221de8966d48f428"},
]
evdev = [
{file = "evdev-1.4.0.tar.gz", hash = "sha256:8782740eb1a86b187334c07feb5127d3faa0b236e113206dfe3ae8f77fb1aaf1"},
]
flake8 = [
- {file = "flake8-3.9.0-py2.py3-none-any.whl", hash = "sha256:12d05ab02614b6aee8df7c36b97d1a3b2372761222b19b58621355e82acddcff"},
- {file = "flake8-3.9.0.tar.gz", hash = "sha256:78873e372b12b093da7b5e5ed302e8ad9e988b38b063b61ad937f26ca58fc5f0"},
+ {file = "flake8-3.9.1-py2.py3-none-any.whl", hash = "sha256:3b9f848952dddccf635be78098ca75010f073bfe14d2c6bda867154bea728d2a"},
+ {file = "flake8-3.9.1.tar.gz", hash = "sha256:1aa8990be1e689d96c745c5682b687ea49f2e05a443aff1f8251092b0014e378"},
]
ftrack-python-api = [
{file = "ftrack-python-api-2.0.0.tar.gz", hash = "sha256:dd6f02c31daf5a10078196dc9eac4671e4297c762fbbf4df98de668ac12281d9"},
@@ -1671,8 +1708,8 @@ google-api-python-client = [
{file = "google_api_python_client-1.12.8-py2.py3-none-any.whl", hash = "sha256:3c4c4ca46b5c21196bec7ee93453443e477d82cbfa79234d1ce0645f81170eaf"},
]
google-auth = [
- {file = "google-auth-1.28.0.tar.gz", hash = "sha256:9bd436d19ab047001a1340720d2b629eb96dd503258c524921ec2af3ee88a80e"},
- {file = "google_auth-1.28.0-py2.py3-none-any.whl", hash = "sha256:dcaba3aa9d4e0e96fd945bf25a86b6f878fcb05770b67adbeb50a63ca4d28a5e"},
+ {file = "google-auth-1.29.0.tar.gz", hash = "sha256:010f011c4e27d3d5eb01106fba6aac39d164842dfcd8709955c4638f5b11ccf8"},
+ {file = "google_auth-1.29.0-py2.py3-none-any.whl", hash = "sha256:f30a672a64d91cc2e3137765d088c5deec26416246f7a9e956eaf69a8d7ed49c"},
]
google-auth-httplib2 = [
{file = "google-auth-httplib2-0.1.0.tar.gz", hash = "sha256:a07c39fd632becacd3f07718dfd6021bf396978f03ad3ce4321d060015cc30ac"},
@@ -1695,8 +1732,8 @@ imagesize = [
{file = "imagesize-1.2.0.tar.gz", hash = "sha256:b1f6b5a4eab1f73479a50fb79fcf729514a900c341d8503d62a62dbc4127a2b1"},
]
importlib-metadata = [
- {file = "importlib_metadata-3.10.0-py3-none-any.whl", hash = "sha256:d2d46ef77ffc85cbf7dac7e81dd663fde71c45326131bea8033b9bad42268ebe"},
- {file = "importlib_metadata-3.10.0.tar.gz", hash = "sha256:c9db46394197244adf2f0b08ec5bc3cf16757e9590b02af1fca085c16c0d600a"},
+ {file = "importlib_metadata-4.0.0-py3-none-any.whl", hash = "sha256:19192b88d959336bfa6bdaaaef99aeafec179eca19c47c804e555703ee5f07ef"},
+ {file = "importlib_metadata-4.0.0.tar.gz", hash = "sha256:2e881981c9748d7282b374b68e759c87745c25427b67ecf0cc67fb6637a1bff9"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
@@ -1719,8 +1756,8 @@ jinja2 = [
{file = "Jinja2-2.11.3.tar.gz", hash = "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6"},
]
jinxed = [
- {file = "jinxed-1.0.1-py2.py3-none-any.whl", hash = "sha256:602f2cb3523c1045456f7b6d79ac19297fd8e933ae3bd9159845dc857f2d519c"},
- {file = "jinxed-1.0.1.tar.gz", hash = "sha256:bc523c74fe676c99ccc69c68c2dcd7d4d2d7b2541f6dbef74ef211aedd8ad0d3"},
+ {file = "jinxed-1.1.0-py2.py3-none-any.whl", hash = "sha256:6a61ccf963c16aa885304f27e6e5693783676897cea0c7f223270c8b8e78baf8"},
+ {file = "jinxed-1.1.0.tar.gz", hash = "sha256:d8f1731f134e9e6b04d95095845ae6c10eb15cb223a5f0cabdea87d4a279c305"},
]
jsonschema = [
{file = "jsonschema-3.2.0-py2.py3-none-any.whl", hash = "sha256:4e5b3cf8216f577bee9ce139cbe72eca3ea4f292ec60928ff24758ce626cd163"},
@@ -1906,27 +1943,31 @@ pluggy = [
{file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"},
{file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"},
]
+prefixed = [
+ {file = "prefixed-0.3.2-py2.py3-none-any.whl", hash = "sha256:5e107306462d63f2f03c529dbf11b0026fdfec621a9a008ca639d71de22995c3"},
+ {file = "prefixed-0.3.2.tar.gz", hash = "sha256:ca48277ba5fa8346dd4b760847da930c7b84416387c39e93affef086add2c029"},
+]
protobuf = [
- {file = "protobuf-3.15.7-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a14141d5c967362d2eedff8825d2b69cc36a5b3ed6b1f618557a04e58a3cf787"},
- {file = "protobuf-3.15.7-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:d54d78f621852ec4fdd1484d1263ca04d4bf5ffdf7abffdbb939e444b6ff3385"},
- {file = "protobuf-3.15.7-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:462085acdb410b06335315fe7e63cb281a1902856e0f4657f341c283cedc1d56"},
- {file = "protobuf-3.15.7-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:849c92ce112e1ef648705c29ce044248e350f71d9d54a2026830623198f0bd38"},
- {file = "protobuf-3.15.7-cp35-cp35m-win32.whl", hash = "sha256:1f6083382f7714700deadf3014e921711e2f807de7f27e40c32b744701ae5b99"},
- {file = "protobuf-3.15.7-cp35-cp35m-win_amd64.whl", hash = "sha256:e17f60f00081adcb32068ee0bb51e418f6474acf83424244ff3512ffd2166385"},
- {file = "protobuf-3.15.7-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:6c75e563c6fb2ca5b8f21dd75c15659aa2c4a0025b9da3a7711ae661cd6a488d"},
- {file = "protobuf-3.15.7-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:d939f41b4108350841c4790ebbadb61729e1363522fdb8434eb4e6f2065d0db1"},
- {file = "protobuf-3.15.7-cp36-cp36m-win32.whl", hash = "sha256:24f14c09d4c0a3641f1b0e9b552d026361de65b01686fdd3e5fdf8f9512cd79b"},
- {file = "protobuf-3.15.7-cp36-cp36m-win_amd64.whl", hash = "sha256:1247170191bcb2a8d978d11a58afe391004ec6c2184e4d961baf8102d43ff500"},
- {file = "protobuf-3.15.7-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:364cadaeec0756afdc099cbd88cb5659bd1bb7d547168d063abcb0272ccbb2f6"},
- {file = "protobuf-3.15.7-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:0c3a6941b1e6e6e22d812a8e5c46bfe83082ea60d262a46f2cfb22d9b9fb17db"},
- {file = "protobuf-3.15.7-cp37-cp37m-win32.whl", hash = "sha256:eb5668f3f6a83b6603ca2e09be5b20de89521ea5914aabe032cce981e4129cc8"},
- {file = "protobuf-3.15.7-cp37-cp37m-win_amd64.whl", hash = "sha256:1001e671cf8476edce7fb72778358d026390649cc35a79d47b2a291684ccfbb2"},
- {file = "protobuf-3.15.7-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a5ba7dd6f97964655aa7b234c95d80886425a31b7010764f042cdeb985314d18"},
- {file = "protobuf-3.15.7-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:46674bd6fcf8c63b4b9869ba579685db67cf51ae966443dd6bd9a8fa00fcef62"},
- {file = "protobuf-3.15.7-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4c4399156fb27e3768313b7a59352c861a893252bda6fb9f3643beb3ebb7047e"},
- {file = "protobuf-3.15.7-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:85cd29faf056036167d87445d5a5059034c298881c044e71a73d3b61a4be1c23"},
- {file = "protobuf-3.15.7-py2.py3-none-any.whl", hash = "sha256:22054432b923c0086f9cf1e1c0c52d39bf3c6e31014ea42eec2dabc22ee26d78"},
- {file = "protobuf-3.15.7.tar.gz", hash = "sha256:2d03fc2591543cd2456d0b72230b50c4519546a8d379ac6fd3ecd84c6df61e5d"},
+ {file = "protobuf-3.15.8-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:fad4f971ec38d8df7f4b632c819bf9bbf4f57cfd7312cf526c69ce17ef32436a"},
+ {file = "protobuf-3.15.8-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:f17b352d7ce33c81773cf81d536ca70849de6f73c96413f17309f4b43ae7040b"},
+ {file = "protobuf-3.15.8-cp35-cp35m-macosx_10_9_intel.whl", hash = "sha256:4a054b0b5900b7ea7014099e783fb8c4618e4209fffcd6050857517b3f156e18"},
+ {file = "protobuf-3.15.8-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:efa4c4d4fc9ba734e5e85eaced70e1b63fb3c8d08482d839eb838566346f1737"},
+ {file = "protobuf-3.15.8-cp35-cp35m-win32.whl", hash = "sha256:07eec4e2ccbc74e95bb9b3afe7da67957947ee95bdac2b2e91b038b832dd71f0"},
+ {file = "protobuf-3.15.8-cp35-cp35m-win_amd64.whl", hash = "sha256:f9cadaaa4065d5dd4d15245c3b68b967b3652a3108e77f292b58b8c35114b56c"},
+ {file = "protobuf-3.15.8-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:2dc0e8a9e4962207bdc46a365b63a3f1aca6f9681a5082a326c5837ef8f4b745"},
+ {file = "protobuf-3.15.8-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f80afc0a0ba13339bbab25ca0409e9e2836b12bb012364c06e97c2df250c3343"},
+ {file = "protobuf-3.15.8-cp36-cp36m-win32.whl", hash = "sha256:c5566f956a26cda3abdfacc0ca2e21db6c9f3d18f47d8d4751f2209d6c1a5297"},
+ {file = "protobuf-3.15.8-cp36-cp36m-win_amd64.whl", hash = "sha256:dab75b56a12b1ceb3e40808b5bd9dfdaef3a1330251956e6744e5b6ed8f8830b"},
+ {file = "protobuf-3.15.8-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3053f13207e7f13dc7be5e9071b59b02020172f09f648e85dc77e3fcb50d1044"},
+ {file = "protobuf-3.15.8-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:1f0b5d156c3df08cc54bc2c8b8b875648ea4cd7ebb2a9a130669f7547ec3488c"},
+ {file = "protobuf-3.15.8-cp37-cp37m-win32.whl", hash = "sha256:90270fe5732c1f1ff664a3bd7123a16456d69b4e66a09a139a00443a32f210b8"},
+ {file = "protobuf-3.15.8-cp37-cp37m-win_amd64.whl", hash = "sha256:f42c2f5fb67da5905bfc03733a311f72fa309252bcd77c32d1462a1ad519521e"},
+ {file = "protobuf-3.15.8-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f6077db37bfa16494dca58a4a02bfdacd87662247ad6bc1f7f8d13ff3f0013e1"},
+ {file = "protobuf-3.15.8-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:510e66491f1a5ac5953c908aa8300ec47f793130097e4557482803b187a8ee05"},
+ {file = "protobuf-3.15.8-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5ff9fa0e67fcab442af9bc8d4ec3f82cb2ff3be0af62dba047ed4187f0088b7d"},
+ {file = "protobuf-3.15.8-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:1c0e9e56202b9dccbc094353285a252e2b7940b74fdf75f1b4e1b137833fabd7"},
+ {file = "protobuf-3.15.8-py2.py3-none-any.whl", hash = "sha256:a0a08c6b2e6d6c74a6eb5bf6184968eefb1569279e78714e239d33126e753403"},
+ {file = "protobuf-3.15.8.tar.gz", hash = "sha256:0277f62b1e42210cafe79a71628c1d553348da81cbd553402a7f7549c50b11d0"},
]
py = [
{file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"},
@@ -2208,16 +2249,16 @@ speedcopy = [
{file = "speedcopy-2.1.0.tar.gz", hash = "sha256:8bb1a6c735900b83901a7be84ba2175ed3887c13c6786f97dea48f2ea7d504c2"},
]
sphinx = [
- {file = "Sphinx-3.5.3-py3-none-any.whl", hash = "sha256:3f01732296465648da43dec8fb40dc451ba79eb3e2cc5c6d79005fd98197107d"},
- {file = "Sphinx-3.5.3.tar.gz", hash = "sha256:ce9c228456131bab09a3d7d10ae58474de562a6f79abb3dc811ae401cf8c1abc"},
+ {file = "Sphinx-3.5.4-py3-none-any.whl", hash = "sha256:2320d4e994a191f4b4be27da514e46b3d6b420f2ff895d064f52415d342461e8"},
+ {file = "Sphinx-3.5.4.tar.gz", hash = "sha256:19010b7b9fa0dc7756a6e105b2aacd3a80f798af3c25c273be64d7beeb482cb1"},
]
sphinx-qt-documentation = [
{file = "sphinx_qt_documentation-0.3-py3-none-any.whl", hash = "sha256:bee247cb9e4fc03fc496d07adfdb943100e1103320c3e5e820e0cfa7c790d9b6"},
{file = "sphinx_qt_documentation-0.3.tar.gz", hash = "sha256:f09a0c9d9e989172ba3e282b92bf55613bb23ad47315ec5b0d38536b343ac6c8"},
]
sphinx-rtd-theme = [
- {file = "sphinx_rtd_theme-0.5.1-py2.py3-none-any.whl", hash = "sha256:fa6bebd5ab9a73da8e102509a86f3fcc36dec04a0b52ea80e5a033b2aba00113"},
- {file = "sphinx_rtd_theme-0.5.1.tar.gz", hash = "sha256:eda689eda0c7301a80cf122dad28b1861e5605cbf455558f3775e1e8200e83a5"},
+ {file = "sphinx_rtd_theme-0.5.2-py2.py3-none-any.whl", hash = "sha256:4a05bdbe8b1446d77a01e20a23ebc6777c74f43237035e76be89699308987d6f"},
+ {file = "sphinx_rtd_theme-0.5.2.tar.gz", hash = "sha256:32bd3b5d13dc8186d7a42fc816a23d32e83a4827d7d9882948e7b837c232da5a"},
]
sphinxcontrib-applehelp = [
{file = "sphinxcontrib-applehelp-1.0.2.tar.gz", hash = "sha256:a072735ec80e7675e3f432fcae8610ecf509c5f1869d17e2eecff44389cdbc58"},
@@ -2254,41 +2295,37 @@ toml = [
{file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
{file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
]
-tqdm = [
- {file = "tqdm-4.60.0-py2.py3-none-any.whl", hash = "sha256:daec693491c52e9498632dfbe9ccfc4882a557f5fa08982db1b4d3adbe0887c3"},
- {file = "tqdm-4.60.0.tar.gz", hash = "sha256:ebdebdb95e3477ceea267decfc0784859aa3df3e27e22d23b83e9b272bf157ae"},
-]
typed-ast = [
- {file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:7703620125e4fb79b64aa52427ec192822e9f45d37d4b6625ab37ef403e1df70"},
- {file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c9aadc4924d4b5799112837b226160428524a9a45f830e0d0f184b19e4090487"},
- {file = "typed_ast-1.4.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:9ec45db0c766f196ae629e509f059ff05fc3148f9ffd28f3cfe75d4afb485412"},
- {file = "typed_ast-1.4.2-cp35-cp35m-win32.whl", hash = "sha256:85f95aa97a35bdb2f2f7d10ec5bbdac0aeb9dafdaf88e17492da0504de2e6400"},
- {file = "typed_ast-1.4.2-cp35-cp35m-win_amd64.whl", hash = "sha256:9044ef2df88d7f33692ae3f18d3be63dec69c4fb1b5a4a9ac950f9b4ba571606"},
- {file = "typed_ast-1.4.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c1c876fd795b36126f773db9cbb393f19808edd2637e00fd6caba0e25f2c7b64"},
- {file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:5dcfc2e264bd8a1db8b11a892bd1647154ce03eeba94b461effe68790d8b8e07"},
- {file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8db0e856712f79c45956da0c9a40ca4246abc3485ae0d7ecc86a20f5e4c09abc"},
- {file = "typed_ast-1.4.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:d003156bb6a59cda9050e983441b7fa2487f7800d76bdc065566b7d728b4581a"},
- {file = "typed_ast-1.4.2-cp36-cp36m-win32.whl", hash = "sha256:4c790331247081ea7c632a76d5b2a265e6d325ecd3179d06e9cf8d46d90dd151"},
- {file = "typed_ast-1.4.2-cp36-cp36m-win_amd64.whl", hash = "sha256:d175297e9533d8d37437abc14e8a83cbc68af93cc9c1c59c2c292ec59a0697a3"},
- {file = "typed_ast-1.4.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cf54cfa843f297991b7388c281cb3855d911137223c6b6d2dd82a47ae5125a41"},
- {file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b4fcdcfa302538f70929eb7b392f536a237cbe2ed9cba88e3bf5027b39f5f77f"},
- {file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:987f15737aba2ab5f3928c617ccf1ce412e2e321c77ab16ca5a293e7bbffd581"},
- {file = "typed_ast-1.4.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:37f48d46d733d57cc70fd5f30572d11ab8ed92da6e6b28e024e4a3edfb456e37"},
- {file = "typed_ast-1.4.2-cp37-cp37m-win32.whl", hash = "sha256:36d829b31ab67d6fcb30e185ec996e1f72b892255a745d3a82138c97d21ed1cd"},
- {file = "typed_ast-1.4.2-cp37-cp37m-win_amd64.whl", hash = "sha256:8368f83e93c7156ccd40e49a783a6a6850ca25b556c0fa0240ed0f659d2fe496"},
- {file = "typed_ast-1.4.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:963c80b583b0661918718b095e02303d8078950b26cc00b5e5ea9ababe0de1fc"},
- {file = "typed_ast-1.4.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e683e409e5c45d5c9082dc1daf13f6374300806240719f95dc783d1fc942af10"},
- {file = "typed_ast-1.4.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:84aa6223d71012c68d577c83f4e7db50d11d6b1399a9c779046d75e24bed74ea"},
- {file = "typed_ast-1.4.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:a38878a223bdd37c9709d07cd357bb79f4c760b29210e14ad0fb395294583787"},
- {file = "typed_ast-1.4.2-cp38-cp38-win32.whl", hash = "sha256:a2c927c49f2029291fbabd673d51a2180038f8cd5a5b2f290f78c4516be48be2"},
- {file = "typed_ast-1.4.2-cp38-cp38-win_amd64.whl", hash = "sha256:c0c74e5579af4b977c8b932f40a5464764b2f86681327410aa028a22d2f54937"},
- {file = "typed_ast-1.4.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07d49388d5bf7e863f7fa2f124b1b1d89d8aa0e2f7812faff0a5658c01c59aa1"},
- {file = "typed_ast-1.4.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:240296b27397e4e37874abb1df2a608a92df85cf3e2a04d0d4d61055c8305ba6"},
- {file = "typed_ast-1.4.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:d746a437cdbca200622385305aedd9aef68e8a645e385cc483bdc5e488f07166"},
- {file = "typed_ast-1.4.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:14bf1522cdee369e8f5581238edac09150c765ec1cb33615855889cf33dcb92d"},
- {file = "typed_ast-1.4.2-cp39-cp39-win32.whl", hash = "sha256:cc7b98bf58167b7f2db91a4327da24fb93368838eb84a44c472283778fc2446b"},
- {file = "typed_ast-1.4.2-cp39-cp39-win_amd64.whl", hash = "sha256:7147e2a76c75f0f64c4319886e7639e490fee87c9d25cb1d4faef1d8cf83a440"},
- {file = "typed_ast-1.4.2.tar.gz", hash = "sha256:9fc0b3cb5d1720e7141d103cf4819aea239f7d136acf9ee4a69b047b7986175a"},
+ {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:2068531575a125b87a41802130fa7e29f26c09a2833fea68d9a40cf33902eba6"},
+ {file = "typed_ast-1.4.3-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c907f561b1e83e93fad565bac5ba9c22d96a54e7ea0267c708bffe863cbe4075"},
+ {file = "typed_ast-1.4.3-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:1b3ead4a96c9101bef08f9f7d1217c096f31667617b58de957f690c92378b528"},
+ {file = "typed_ast-1.4.3-cp35-cp35m-win32.whl", hash = "sha256:dde816ca9dac1d9c01dd504ea5967821606f02e510438120091b84e852367428"},
+ {file = "typed_ast-1.4.3-cp35-cp35m-win_amd64.whl", hash = "sha256:777a26c84bea6cd934422ac2e3b78863a37017618b6e5c08f92ef69853e765d3"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f8afcf15cc511ada719a88e013cec87c11aff7b91f019295eb4530f96fe5ef2f"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:52b1eb8c83f178ab787f3a4283f68258525f8d70f778a2f6dd54d3b5e5fb4341"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:01ae5f73431d21eead5015997ab41afa53aa1fbe252f9da060be5dad2c730ace"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:c190f0899e9f9f8b6b7863debfb739abcb21a5c054f911ca3596d12b8a4c4c7f"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-win32.whl", hash = "sha256:398e44cd480f4d2b7ee8d98385ca104e35c81525dd98c519acff1b79bdaac363"},
+ {file = "typed_ast-1.4.3-cp36-cp36m-win_amd64.whl", hash = "sha256:bff6ad71c81b3bba8fa35f0f1921fb24ff4476235a6e94a26ada2e54370e6da7"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0fb71b8c643187d7492c1f8352f2c15b4c4af3f6338f21681d3681b3dc31a266"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:760ad187b1041a154f0e4d0f6aae3e40fdb51d6de16e5c99aedadd9246450e9e"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:5feca99c17af94057417d744607b82dd0a664fd5e4ca98061480fd8b14b18d04"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:95431a26309a21874005845c21118c83991c63ea800dd44843e42a916aec5899"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-win32.whl", hash = "sha256:aee0c1256be6c07bd3e1263ff920c325b59849dc95392a05f258bb9b259cf39c"},
+ {file = "typed_ast-1.4.3-cp37-cp37m-win_amd64.whl", hash = "sha256:9ad2c92ec681e02baf81fdfa056fe0d818645efa9af1f1cd5fd6f1bd2bdfd805"},
+ {file = "typed_ast-1.4.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b36b4f3920103a25e1d5d024d155c504080959582b928e91cb608a65c3a49e1a"},
+ {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_i686.whl", hash = "sha256:067a74454df670dcaa4e59349a2e5c81e567d8d65458d480a5b3dfecec08c5ff"},
+ {file = "typed_ast-1.4.3-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7538e495704e2ccda9b234b82423a4038f324f3a10c43bc088a1636180f11a41"},
+ {file = "typed_ast-1.4.3-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:af3d4a73793725138d6b334d9d247ce7e5f084d96284ed23f22ee626a7b88e39"},
+ {file = "typed_ast-1.4.3-cp38-cp38-win32.whl", hash = "sha256:f2362f3cb0f3172c42938946dbc5b7843c2a28aec307c49100c8b38764eb6927"},
+ {file = "typed_ast-1.4.3-cp38-cp38-win_amd64.whl", hash = "sha256:dd4a21253f42b8d2b48410cb31fe501d32f8b9fbeb1f55063ad102fe9c425e40"},
+ {file = "typed_ast-1.4.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f328adcfebed9f11301eaedfa48e15bdece9b519fb27e6a8c01aa52a17ec31b3"},
+ {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_i686.whl", hash = "sha256:2c726c276d09fc5c414693a2de063f521052d9ea7c240ce553316f70656c84d4"},
+ {file = "typed_ast-1.4.3-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:cae53c389825d3b46fb37538441f75d6aecc4174f615d048321b716df2757fb0"},
+ {file = "typed_ast-1.4.3-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:b9574c6f03f685070d859e75c7f9eeca02d6933273b5e69572e5ff9d5e3931c3"},
+ {file = "typed_ast-1.4.3-cp39-cp39-win32.whl", hash = "sha256:209596a4ec71d990d71d5e0d312ac935d86930e6eecff6ccc7007fe54d703808"},
+ {file = "typed_ast-1.4.3-cp39-cp39-win_amd64.whl", hash = "sha256:9c6d1a54552b5330bc657b7ef0eae25d00ba7ffe85d9ea8ae6540d2197a3788c"},
+ {file = "typed_ast-1.4.3.tar.gz", hash = "sha256:fb1bbeac803adea29cedd70781399c99138358c26d05fcbd23c13016b7f5ec65"},
]
typing-extensions = [
{file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"},
diff --git a/pyproject.toml b/pyproject.toml
index c8c0d5b4ff..88c977cd99 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,7 +1,7 @@
[tool.poetry]
name = "OpenPype"
version = "3.0.0-beta2"
-description = "Multi-platform open-source pipeline built around the Avalon platform, expanding it with extra features and integrations."
+description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"
homepage = "https://openpype.io"
@@ -43,12 +43,13 @@ jinxed = [
{ version = "^1.0.1", markers = "sys_platform == 'linux'" }
]
python3-xlib = { version="*", markers = "sys_platform == 'linux'"}
+enlighten = "^1.9.0"
[tool.poetry.dev-dependencies]
flake8 = "^3.7"
autopep8 = "^1.4"
coverage = "*"
-cx_freeze = "^6.5"
+cx_freeze = "^6.6"
jedi = "^0.13"
Jinja2 = "^2.11"
pycodestyle = "^2.5.0"
@@ -62,8 +63,8 @@ sphinx-rtd-theme = "*"
sphinxcontrib-websupport = "*"
sphinx-qt-documentation = "*"
recommonmark = "*"
-tqdm = "*"
wheel = "*"
+enlighten = "*" # cool terminal progress bars
[tool.poetry.urls]
"Bug Tracker" = "https://github.com/pypeclub/openpype/issues"
@@ -76,3 +77,29 @@ url = "https://distribute.openpype.io/wheels/"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
+
+[openpype]
+
+[openpype.thirdparty.ffmpeg.windows]
+url = "https://distribute.openpype.io/thirdparty/ffmpeg-4.4-windows.zip"
+hash = "dd51ba29d64ee238e7c4c3c7301b19754c3f0ee2e2a729c20a0e2789e72db925"
+
+[openpype.thirdparty.ffmpeg.linux]
+url = "https://distribute.openpype.io/thirdparty/ffmpeg-4.4-linux.tgz"
+hash = "10b9beda57cfbb69b9ed0ce896c0c8d99227b26ca8b9f611040c4752e365cbe9"
+
+[openpype.thirdparty.ffmpeg.darwin]
+url = "https://distribute.openpype.io/thirdparty/ffmpeg-4.4-macos.tgz"
+hash = "95f43568338c275f80dc0cab1e1836a2e2270f856f0e7b204440d881dd74fbdb"
+
+[openpype.thirdparty.oiio.windows]
+url = "https://distribute.openpype.io/thirdparty/oiio_tools-2.2.0-windows.zip"
+hash = "fd2e00278e01e85dcee7b4a6969d1a16f13016ec16700fb0366dbb1b1f3c37ad"
+
+[openpype.thirdparty.oiio.linux]
+url = "https://distribute.openpype.io/thirdparty/oiio-2.2.0-linux.tgz"
+hash = "sha256:..."
+
+[openpype.thirdparty.oiio.darwin]
+url = "https://distribute.openpype.io/thirdparty/oiio-2.2.0-darwin.tgz"
+hash = "sha256:..."
\ No newline at end of file
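
The new `[openpype.thirdparty.*]` tables in `pyproject.toml` pair each download URL with a sha256 checksum, so a fetch tool can verify an archive before extracting it to `vendor/bin`. A minimal sketch of that verification step, assuming the checksum is stored either as a bare hex digest or with a `sha256:` prefix (both styles appear in the tables above; the helper name is hypothetical, not the actual `fetch_thirdparty_libs.py` API):

```python
import hashlib


def verify_sha256(data: bytes, expected: str) -> bool:
    """Check downloaded archive bytes against the checksum from pyproject.toml.

    Accepts either a bare hex digest or a "sha256:<hex>" prefixed value.
    """
    if expected.startswith("sha256:"):
        expected = expected[len("sha256:"):]
    return hashlib.sha256(data).hexdigest() == expected
```

A fetch script would download the platform-specific `url`, run this check against `hash`, and only unpack the archive on a match.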
diff --git a/repos/avalon-core b/repos/avalon-core
index 911bd8999a..807e8577a0 160000
--- a/repos/avalon-core
+++ b/repos/avalon-core
@@ -1 +1 @@
-Subproject commit 911bd8999ab0030d0f7412dde6fd545c1a73b62d
+Subproject commit 807e8577a0268580a2934ba38889911adad26eb1
diff --git a/setup.py b/setup.py
index fd589e5251..c096befa34 100644
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,8 @@ install_requires = [
"googleapiclient",
"httplib2",
# Harmony implementation
- "filecmp"
+ "filecmp",
+ "dns"
]
includes = []
@@ -69,7 +70,11 @@ if sys.platform == "win32":
"pythoncom"
])
-build_options = dict(
+
+icon_path = openpype_root / "igniter" / "openpype.ico"
+mac_icon_path = openpype_root / "igniter" / "openpype.icns"
+
+build_exe_options = dict(
packages=install_requires,
includes=includes,
excludes=excludes,
@@ -78,13 +83,16 @@ build_options = dict(
optimize=0
)
-icon_path = openpype_root / "igniter" / "openpype.ico"
+bdist_mac_options = dict(
+ bundle_name="OpenPype",
+ iconfile=mac_icon_path
+)
executables = [
- Executable("start.py", base=None,
- target_name="openpype_console", icon=icon_path.as_posix()),
Executable("start.py", base=base,
- target_name="openpype_gui", icon=icon_path.as_posix())
+ target_name="openpype_gui", icon=icon_path.as_posix()),
+ Executable("start.py", base=None,
+ target_name="openpype_console", icon=icon_path.as_posix())
]
setup(
@@ -93,7 +101,8 @@ setup(
description="Ultimate pipeline",
cmdclass={"build_sphinx": BuildDoc},
options={
- "build_exe": build_options,
+ "build_exe": build_exe_options,
+ "bdist_mac": bdist_mac_options,
"build_sphinx": {
"project": "OpenPype",
"version": __version__,
diff --git a/start.py b/start.py
index a2a03f112c..0295d0ca62 100644
--- a/start.py
+++ b/start.py
@@ -115,6 +115,7 @@ else:
os.path.join(OPENPYPE_ROOT, "dependencies")
)
sys.path.append(frozen_libs)
+ sys.path.insert(0, OPENPYPE_ROOT)
# add stuff from `/dependencies` to PYTHONPATH.
pythonpath = os.getenv("PYTHONPATH", "")
paths = pythonpath.split(os.pathsep)
@@ -123,7 +124,10 @@ else:
import igniter # noqa: E402
from igniter import BootstrapRepos # noqa: E402
-from igniter.tools import get_openpype_path_from_db # noqa
+from igniter.tools import (
+ get_openpype_path_from_db,
+ validate_mongo_connection
+) # noqa
from igniter.bootstrap_repos import OpenPypeVersion # noqa: E402
bootstrap = BootstrapRepos()
@@ -285,6 +289,10 @@ def _process_arguments() -> tuple:
if return_code not in [2, 3]:
sys.exit(return_code)
+ idx = sys.argv.index("igniter")
+ sys.argv.pop(idx)
+ sys.argv.insert(idx, "tray")
+
return use_version, use_staging
@@ -305,20 +313,32 @@ def _determine_mongodb() -> str:
openpype_mongo = os.getenv("OPENPYPE_MONGO", None)
if not openpype_mongo:
# try system keyring
+ try:
+ openpype_mongo = bootstrap.secure_registry.get_item(
+ "openPypeMongo"
+ )
+ except ValueError:
+ pass
+
+ if openpype_mongo:
+ result, msg = validate_mongo_connection(openpype_mongo)
+ if not result:
+ print(msg)
+ openpype_mongo = None
+
+ if not openpype_mongo:
+ print("*** No DB connection string specified.")
+ print("--- launching setup UI ...")
+
+ result = igniter.open_dialog()
+ if result == 0:
+ raise RuntimeError("MongoDB URL was not defined")
+
try:
openpype_mongo = bootstrap.secure_registry.get_item(
"openPypeMongo")
except ValueError:
- print("*** No DB connection string specified.")
- print("--- launching setup UI ...")
- import igniter
- igniter.open_dialog()
-
- try:
- openpype_mongo = bootstrap.secure_registry.get_item(
- "openPypeMongo")
- except ValueError:
- raise RuntimeError("missing mongodb url")
+ raise RuntimeError("Missing MongoDB url")
return openpype_mongo
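
The reworked `_determine_mongodb` flow above resolves the connection string in order: environment variable, then system keyring (validated), then the setup dialog, then a re-read of the keyring the dialog populated. That logic can be sketched as a pure function with injected providers (the parameter names are hypothetical stand-ins for `bootstrap.secure_registry.get_item`, `validate_mongo_connection`, and `igniter.open_dialog`):

```python
def resolve_mongo_url(env_url, keyring_get, validate, open_dialog):
    """Resolve MongoDB URL: env var -> keyring -> setup dialog -> keyring.

    `keyring_get(name)` may raise ValueError when nothing is stored;
    `validate(url)` returns (ok, message); `open_dialog()` returns 0 on cancel.
    """
    url = env_url
    if not url:
        # try system keyring; it may hold a previously saved value
        try:
            url = keyring_get("openPypeMongo")
        except ValueError:
            url = None
        if url:
            ok, msg = validate(url)
            if not ok:
                print(msg)
                url = None
        if not url:
            # launch setup UI, which stores the value into the keyring
            if open_dialog() == 0:
                raise RuntimeError("MongoDB URL was not defined")
            try:
                url = keyring_get("openPypeMongo")
            except ValueError:
                raise RuntimeError("Missing MongoDB url")
    return url
```

The injected-provider shape makes the ordering testable without a real keyring or UI.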
diff --git a/test_localsystem.txt b/test_localsystem.txt
new file mode 100644
index 0000000000..dde7986af8
--- /dev/null
+++ b/test_localsystem.txt
@@ -0,0 +1 @@
+I have run
diff --git a/tools/build.ps1 b/tools/build.ps1
index 412bb111c1..5283ee4754 100644
--- a/tools/build.ps1
+++ b/tools/build.ps1
@@ -121,6 +121,10 @@ catch {
Exit-WithCode 1
}
+Write-Host ">>> " -NoNewLine -ForegroundColor green
+Write-Host "Making sure submodules are up-to-date ..."
+git submodule update --init --recursive
+
Write-Host ">>> " -NoNewline -ForegroundColor green
Write-Host "Building OpenPype [ " -NoNewline -ForegroundColor white
Write-host $openpype_version -NoNewline -ForegroundColor green
diff --git a/tools/build.sh b/tools/build.sh
index b95e2969c4..d0593a2b2f 100755
--- a/tools/build.sh
+++ b/tools/build.sh
@@ -157,10 +157,33 @@ main () {
install_poetry || { echo -e "${BIRed}!!!${RST} Poetry installation failed"; return; }
fi
+ echo -e "${BIGreen}>>>${RST} Making sure submodules are up-to-date ..."
+ git submodule update --init --recursive
+
echo -e "${BIGreen}>>>${RST} Building ..."
- poetry run python3 "$openpype_root/setup.py" build > "$openpype_root/build/build.log" || { echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return; }
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ poetry run python3 "$openpype_root/setup.py" build > "$openpype_root/build/build.log" || { echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return; }
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ poetry run python3 "$openpype_root/setup.py" bdist_mac > "$openpype_root/build/build.log" || { echo -e "${BIRed}!!!${RST} Build failed, see the build log."; return; }
+ fi
poetry run python3 "$openpype_root/tools/build_dependencies.py"
+ if [[ "$OSTYPE" == "darwin"* ]]; then
+ # fix code signing issue
+ codesign --remove-signature "$openpype_root/build/OpenPype.app/Contents/MacOS/lib/Python"
+ if command -v create-dmg > /dev/null 2>&1; then
+ create-dmg \
+ --volname "OpenPype Installer" \
+ --window-pos 200 120 \
+ --window-size 600 300 \
+ --app-drop-link 100 50 \
+ "$openpype_root/build/OpenPype-Installer.dmg" \
+ "$openpype_root/build/OpenPype.app"
+ else
+ echo -e "${BIYellow}!!!${RST} ${BIWhite}create-dmg${RST} command is not available."
+ fi
+ fi
+
echo -e "${BICyan}>>>${RST} All done. You will find OpenPype and build log in \c"
echo -e "${BIWhite}$openpype_root/build${RST} directory."
}
diff --git a/tools/build_dependencies.py b/tools/build_dependencies.py
index e49e930a70..fb52e2b5fd 100644
--- a/tools/build_dependencies.py
+++ b/tools/build_dependencies.py
@@ -22,6 +22,7 @@ import os
import sys
import site
from distutils.util import get_platform
+import platform
from pathlib import Path
import shutil
import blessed
@@ -76,7 +77,14 @@ _print(f"Working with: {site_pkg}", 2)
build_dir = "exe.{}-{}".format(get_platform(), sys.version[0:3])
# create full path
-build_dir = Path(os.path.dirname(__file__)).parent / "build" / build_dir
+if platform.system().lower() == "darwin":
+ build_dir = Path(os.path.dirname(__file__)).parent.joinpath(
+ "build",
+ "OpenPype.app",
+ "Contents",
+ "MacOS")
+else:
+ build_dir = Path(os.path.dirname(__file__)).parent / "build" / build_dir
_print(f"Using build at {build_dir}", 2)
if not build_dir.exists():
diff --git a/tools/build_win_installer.ps1 b/tools/build_win_installer.ps1
new file mode 100644
index 0000000000..4a4d011258
--- /dev/null
+++ b/tools/build_win_installer.ps1
@@ -0,0 +1,140 @@
+<#
+.SYNOPSIS
+ Helper script to build the OpenPype installer for Windows.
+
+.DESCRIPTION
+ This script will detect Python installation and package the frozen
+ OpenPype build from the `build` directory into a Windows installer,
+ using the existing virtual environment created by Poetry (or by
+ running `/tools/create_env.ps1`).
+
+.EXAMPLE
+
+PS> .\build_win_installer.ps1
+
+#>
+
+function Start-Progress {
+ param([ScriptBlock]$code)
+ $scroll = "/-\|/-\|"
+ $idx = 0
+ $job = Invoke-Command -ComputerName $env:ComputerName -ScriptBlock { $code } -AsJob
+
+ $origpos = $host.UI.RawUI.CursorPosition
+
+ # $origpos.Y -= 1
+
+ while (($job.State -eq "Running") -and ($job.State -ne "NotStarted"))
+ {
+ $host.UI.RawUI.CursorPosition = $origpos
+ Write-Host $scroll[$idx] -NoNewline
+ $idx++
+ if ($idx -ge $scroll.Length)
+ {
+ $idx = 0
+ }
+ Start-Sleep -Milliseconds 100
+ }
+ # It's over - clear the activity indicator.
+ $host.UI.RawUI.CursorPosition = $origpos
+ Write-Host ' '
+ <#
+ .SYNOPSIS
+ Display spinner for running job
+ .PARAMETER code
+ Job to display spinner for
+ #>
+}
+
+
+function Exit-WithCode($exitcode) {
+ # Only exit this host process if it's a child of another PowerShell parent process...
+ $parentPID = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$PID" | Select-Object -Property ParentProcessId).ParentProcessId
+ $parentProcName = (Get-CimInstance -ClassName Win32_Process -Filter "ProcessId=$parentPID" | Select-Object -Property Name).Name
+ if ('powershell.exe' -eq $parentProcName) { $host.SetShouldExit($exitcode) }
+
+ exit $exitcode
+}
+
+function Show-PSWarning() {
+ if ($PSVersionTable.PSVersion.Major -lt 7) {
+ Write-Host "!!! " -NoNewline -ForegroundColor Red
+ Write-Host "You are using an old version of PowerShell: $($PSVersionTable.PSVersion.Major).$($PSVersionTable.PSVersion.Minor)"
+ Write-Host "Please update to at least 7.0 - " -NoNewline -ForegroundColor Gray
+ Write-Host "https://github.com/PowerShell/PowerShell/releases" -ForegroundColor White
+ Exit-WithCode 1
+ }
+}
+
+function Install-Poetry() {
+ Write-Host ">>> " -NoNewline -ForegroundColor Green
+ Write-Host "Installing Poetry ... "
+ (Invoke-WebRequest -Uri https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py -UseBasicParsing).Content | python -
+ # add it to PATH
+ $env:PATH = "$($env:PATH);$($env:USERPROFILE)\.poetry\bin"
+}
+
+$art = @"
+
+▒█▀▀▀█ █▀▀█ █▀▀ █▀▀▄ ▒█▀▀█ █░░█ █▀▀█ █▀▀ ▀█▀ ▀█▀ ▀█▀
+▒█░░▒█ █░░█ █▀▀ █░░█ ▒█▄▄█ █▄▄█ █░░█ █▀▀ ▒█░ ▒█░ ▒█░
+▒█▄▄▄█ █▀▀▀ ▀▀▀ ▀░░▀ ▒█░░░ ▄▄▄█ █▀▀▀ ▀▀▀ ▄█▄ ▄█▄ ▄█▄
+ .---= [ by Pype Club ] =---.
+ https://openpype.io
+
+"@
+
+Write-Host $art -ForegroundColor DarkGreen
+
+# Enable if PS 7.x is needed.
+# Show-PSWarning
+
+$current_dir = Get-Location
+$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
+$openpype_root = (Get-Item $script_dir).parent.FullName
+
+Set-Location -Path $openpype_root
+
+$version_file = Get-Content -Path "$($openpype_root)\openpype\version.py"
+$result = [regex]::Matches($version_file, '__version__ = "(?<version>\d+\.\d+.\d+.*)"')
+$openpype_version = $result[0].Groups['version'].Value
+if (-not $openpype_version) {
+ Write-Host "!!! " -ForegroundColor yellow -NoNewline
+ Write-Host "Cannot determine OpenPype version."
+ Exit-WithCode 1
+}
+$env:BUILD_VERSION = $openpype_version
+
+Write-Host ">>> " -NoNewline -ForegroundColor green
+Write-Host "Creating OpenPype installer ... " -ForegroundColor white
+
+$build_dir_command = @"
+import sys
+from distutils.util import get_platform
+print('exe.{}-{}'.format(get_platform(), sys.version[0:3]))
+"@
+
+$build_dir = & python -c $build_dir_command
+Write-Host "Build directory ... ${build_dir}" -ForegroundColor white
+$env:BUILD_DIR = $build_dir
+
+if (Get-Command iscc -errorAction SilentlyContinue -ErrorVariable ProcessError)
+{
+ iscc "$openpype_root\inno_setup.iss"
+} else {
+ Write-Host "!!! Cannot find Inno Setup command" -ForegroundColor red
+ Write-Host "!!! You can download it at https://jrsoftware.org/" -ForegroundColor red
+ Exit-WithCode 1
+}
+
+
+Write-Host ">>> " -NoNewline -ForegroundColor green
+Write-Host "restoring current directory"
+Set-Location -Path $current_dir
+
+Write-Host "*** " -NoNewline -ForegroundColor Cyan
+Write-Host "All done. You will find OpenPype installer in " -NoNewLine
+Write-Host "'.\build'" -NoNewline -ForegroundColor Green
+Write-Host " directory."
diff --git a/tools/create_env.ps1 b/tools/create_env.ps1
index 44e1799be8..e72e98e04b 100644
--- a/tools/create_env.ps1
+++ b/tools/create_env.ps1
@@ -133,7 +133,7 @@ if (-not (Test-Path -PathType Leaf -Path "$($openpype_root)\poetry.lock")) {
Write-Host ">>> " -NoNewline -ForegroundColor green
Write-Host "Installing virtual environment from lock."
}
-& poetry install $poetry_verbosity
+& poetry install --no-root $poetry_verbosity
if ($LASTEXITCODE -ne 0) {
Write-Host "!!! " -ForegroundColor yellow -NoNewline
Write-Host "Poetry command failed."
diff --git a/tools/create_env.sh b/tools/create_env.sh
index 7bdb8503fd..04414ddea5 100755
--- a/tools/create_env.sh
+++ b/tools/create_env.sh
@@ -160,7 +160,7 @@ main () {
echo -e "${BIGreen}>>>${RST} Installing dependencies ..."
fi
- poetry install $poetry_verbosity || { echo -e "${BIRed}!!!${RST} Poetry environment installation failed"; return; }
+ poetry install --no-root $poetry_verbosity || { echo -e "${BIRed}!!!${RST} Poetry environment installation failed"; return; }
echo -e "${BIGreen}>>>${RST} Cleaning cache files ..."
clean_pyc
diff --git a/tools/fetch_thirdparty_libs.ps1 b/tools/fetch_thirdparty_libs.ps1
new file mode 100644
index 0000000000..d1b914fac2
--- /dev/null
+++ b/tools/fetch_thirdparty_libs.ps1
@@ -0,0 +1,20 @@
+<#
+.SYNOPSIS
+ Download and extract third-party dependencies for OpenPype.
+
+.DESCRIPTION
+ This will download third-party dependencies specified in pyproject.toml
+ and extract them to vendor/bin folder.
+
+.EXAMPLE
+
+PS> .\fetch_thirdparty_libs.ps1
+
+#>
+$current_dir = Get-Location
+$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent
+$openpype_root = (Get-Item $script_dir).parent.FullName
+Set-Location -Path $openpype_root
+
+& poetry run python "$($openpype_root)\tools\fetch_thirdparty_libs.py"
+Set-Location -Path $current_dir
diff --git a/tools/fetch_thirdparty_libs.py b/tools/fetch_thirdparty_libs.py
new file mode 100644
index 0000000000..75ee052950
--- /dev/null
+++ b/tools/fetch_thirdparty_libs.py
@@ -0,0 +1,160 @@
+# -*- coding: utf-8 -*-
+"""Fetch, verify and process third-party dependencies of OpenPype.
+
+Those should be defined in `pyproject.toml` in OpenPype sources root.
+
+"""
+import os
+import sys
+import toml
+import shutil
+from pathlib import Path
+from urllib.parse import urlparse
+import requests
+import enlighten
+import platform
+import blessed
+import tempfile
+import math
+import hashlib
+import tarfile
+import zipfile
+import time
+
+
+term = blessed.Terminal()
+manager = enlighten.get_manager()
+hash_buffer_size = 65536
+
+
+def sha256_sum(filename: Path):
+ """Calculate sha256 hash for given file.
+
+ Args:
+ filename (Path): path to file.
+
+ Returns:
+ str: hex hash.
+
+ """
+ _hash = hashlib.sha256()
+ with open(filename, 'rb', buffering=0) as f:
+ buffer = bytearray(128 * 1024)
+ mv = memoryview(buffer)
+ for n in iter(lambda: f.readinto(mv), 0):
+ _hash.update(mv[:n])
+ return _hash.hexdigest()
+
+
+def _print(msg: str, message_type: int = 0) -> None:
+ """Print message to console.
+
+ Args:
+ msg (str): message to print
+ message_type (int): type of message (0 info, 1 error, 2 note)
+
+ """
+ if message_type == 0:
+ header = term.aquamarine3(">>> ")
+ elif message_type == 1:
+ header = term.orangered2("!!! ")
+ elif message_type == 2:
+ header = term.tan1("... ")
+ else:
+ header = term.darkolivegreen3("--- ")
+
+ print("{}{}".format(header, msg))
+
+
+_print("Processing third-party dependencies ...")
+start_time = time.time_ns()
+openpype_root = Path(os.path.dirname(__file__)).parent
+pyproject = toml.load(openpype_root / "pyproject.toml")
+platform_name = platform.system().lower()
+
+try:
+ thirdparty = pyproject["openpype"]["thirdparty"]
+except KeyError:
+ _print("No third-party libraries specified in pyproject.toml", 1)
+ sys.exit(1)
+
+for k, v in thirdparty.items():
+ _print(f"processing {k}")
+ destination_path = openpype_root / "vendor" / "bin" / k / platform_name
+    # check the platform definition exists before accessing its url
+    if not v.get(platform_name):
+        _print(("missing definition for current "
+                f"platform [ {platform_name} ]"), 1)
+        sys.exit(1)
+
+    url = v[platform_name]["url"]
+
+ parsed_url = urlparse(url)
+
+ # check if file is already extracted in /vendor/bin
+ if destination_path.exists():
+ _print("destination path already exists, deleting ...", 2)
+ if destination_path.is_dir():
+ try:
+ shutil.rmtree(destination_path)
+ except OSError as e:
+ _print("cannot delete folder.", 1)
+ raise SystemExit(e)
+
+ # download file
+ _print(f"Downloading {url} ...")
+ with tempfile.TemporaryDirectory() as temp_dir:
+ temp_file = Path(temp_dir) / Path(parsed_url.path).name
+
+ r = requests.get(url, stream=True)
+ content_len = int(r.headers.get('Content-Length', '0')) or None
+ with manager.counter(color='green',
+ total=content_len and math.ceil(content_len / 2 ** 20), # noqa: E501
+ unit='MiB', leave=False) as counter:
+ with open(temp_file, 'wb', buffering=2 ** 24) as file_handle:
+ for chunk in r.iter_content(chunk_size=2 ** 20):
+ file_handle.write(chunk)
+ counter.update()
+
+ # get file with checksum
+ _print("Calculating sha256 ...", 2)
+ calc_checksum = sha256_sum(temp_file)
+    if v.get(platform_name).get("hash") != calc_checksum:
+        _print("Downloaded file's checksum is invalid.", 1)
+        sys.exit(1)
+
+ _print("File OK", 3)
+ if not destination_path.exists():
+ destination_path.mkdir(parents=True)
+
+ # extract to destination
+ archive_type = temp_file.suffix.lstrip(".")
+ _print(f"Extracting {archive_type} file to {destination_path}")
+ if archive_type in ['zip']:
+ zip_file = zipfile.ZipFile(temp_file)
+ zip_file.extractall(destination_path)
+ zip_file.close()
+
+    elif archive_type in [
+        'tar', 'tgz', 'gz', 'xz', 'bz2',
+        'tar.gz', 'tar.xz', 'tar.bz2'
+    ]:
+        # Path.suffix of "foo.tar.gz" is just ".gz", so bare
+        # compression suffixes must be matched as well.
+ if archive_type == 'tar':
+ tar_type = 'r:'
+ elif archive_type.endswith('xz'):
+ tar_type = 'r:xz'
+ elif archive_type.endswith('gz'):
+ tar_type = 'r:gz'
+ elif archive_type.endswith('bz2'):
+ tar_type = 'r:bz2'
+ else:
+ tar_type = 'r:*'
+ try:
+ tar_file = tarfile.open(temp_file, tar_type)
+ except tarfile.ReadError:
+ raise SystemExit("corrupted archive")
+ tar_file.extractall(destination_path)
+ tar_file.close()
+ _print("Extraction OK", 3)
+
+end_time = time.time_ns()
+total_time = (end_time - start_time) / 1000000000
+_print(f"Downloading and extracting took {total_time:.2f} secs.")
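For reference, a minimal sketch of the `pyproject.toml` section this script consumes. The section layout and keys (`url`, `hash`, per-platform tables keyed by `platform.system().lower()`) are inferred from the code above; the tool name, URL, and hash value are placeholders:

```toml
# hypothetical example -- real entries live in OpenPype's pyproject.toml
[openpype.thirdparty.ffmpeg.windows]
url = "https://example.com/ffmpeg-windows.zip"
hash = "<sha256 of the archive>"

[openpype.thirdparty.ffmpeg.linux]
url = "https://example.com/ffmpeg-linux.tar.gz"
hash = "<sha256 of the archive>"
```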
diff --git a/tools/fetch_thirdparty_libs.sh b/tools/fetch_thirdparty_libs.sh
new file mode 100755
index 0000000000..e305b4b3e4
--- /dev/null
+++ b/tools/fetch_thirdparty_libs.sh
@@ -0,0 +1,129 @@
+#!/usr/bin/env bash
+
+# Fetch third-party dependencies for OpenPype
+
+
+art () {
+ cat <<-EOF
+ ____________
+ /\\ ___ \\
+ \\ \\ \\/_\\ \\
+ \\ \\ _____/ ______ ___ ___ ___
+ \\ \\ \\___/ /\\ \\ \\ \\\\ \\\\ \\
+ \\ \\____\\ \\ \\_____\\ \\__\\\\__\\\\__\\
+ \\/____/ \\/_____/ . PYPE Club .
+
+EOF
+}
+
+# Colors for terminal
+
+RST='\033[0m' # Text Reset
+
+# Regular Colors
+Black='\033[0;30m' # Black
+Red='\033[0;31m' # Red
+Green='\033[0;32m' # Green
+Yellow='\033[0;33m' # Yellow
+Blue='\033[0;34m' # Blue
+Purple='\033[0;35m' # Purple
+Cyan='\033[0;36m' # Cyan
+White='\033[0;37m' # White
+
+# Bold
+BBlack='\033[1;30m' # Black
+BRed='\033[1;31m' # Red
+BGreen='\033[1;32m' # Green
+BYellow='\033[1;33m' # Yellow
+BBlue='\033[1;34m' # Blue
+BPurple='\033[1;35m' # Purple
+BCyan='\033[1;36m' # Cyan
+BWhite='\033[1;37m' # White
+
+# Bold High Intensity
+BIBlack='\033[1;90m' # Black
+BIRed='\033[1;91m' # Red
+BIGreen='\033[1;92m' # Green
+BIYellow='\033[1;93m' # Yellow
+BIBlue='\033[1;94m' # Blue
+BIPurple='\033[1;95m' # Purple
+BICyan='\033[1;96m' # Cyan
+BIWhite='\033[1;97m' # White
+
+
+##############################################################################
+# Detect required version of python
+# Globals:
+# colors
+# PYTHON
+# Arguments:
+# None
+# Returns:
+# None
+###############################################################################
+detect_python () {
+ echo -e "${BIGreen}>>>${RST} Using python \c"
+ local version_command="import sys;print('{0}.{1}'.format(sys.version_info[0], sys.version_info[1]))"
+ local python_version="$(python3 <<< ${version_command})"
+ oIFS="$IFS"
+ IFS=.
+ set -- $python_version
+ IFS="$oIFS"
+ if [ "$1" -ge "3" ] && [ "$2" -ge "6" ] ; then
+ if [ "$2" -gt "7" ] ; then
+ echo -e "${BIWhite}[${RST} ${BIRed}$1.$2 ${BIWhite}]${RST} - ${BIRed}FAILED${RST} ${BIYellow}Version is new and unsupported, use${RST} ${BIPurple}3.7.x${RST}"; return 1;
+ else
+ echo -e "${BIWhite}[${RST} ${BIGreen}$1.$2${RST} ${BIWhite}]${RST}"
+ fi
+ PYTHON="python3"
+ else
+        command -v python3 >/dev/null 2>&1 || { echo -e "${BIRed}$1.$2${RST} - ${BIRed}FAILED${RST} ${BIYellow}Version is old and unsupported${RST}"; return 1; }
+ fi
+}
+
+##############################################################################
+# Clean pyc files in specified directory
+# Globals:
+# None
+# Arguments:
+# Optional path to clean
+# Returns:
+# None
+###############################################################################
+clean_pyc () {
+ local path
+ path=$pype_root
+ echo -e "${BIGreen}>>>${RST} Cleaning pyc at [ ${BIWhite}$path${RST} ] ... \c"
+ find "$path" -regex '^.*\(__pycache__\|\.py[co]\)$' -delete
+ echo -e "${BIGreen}DONE${RST}"
+}
+
+##############################################################################
+# Return absolute path
+# Globals:
+# None
+# Arguments:
+# Path to resolve
+# Returns:
+# None
+###############################################################################
+realpath () {
+ echo $(cd $(dirname "$1"); pwd)/$(basename "$1")
+}
+
+# Main
+main () {
+ echo -e "${BGreen}"
+ art
+ echo -e "${RST}"
+ detect_python || return 1
+
+ # Directories
+ pype_root=$(realpath $(dirname $(dirname "${BASH_SOURCE[0]}")))
+ pushd "$pype_root" > /dev/null || return > /dev/null
+
+    echo -e "${BIGreen}>>>${RST} Fetching third-party dependencies ..."
+ poetry run python3 "$pype_root/tools/fetch_thirdparty_libs.py"
+}
+
+main
\ No newline at end of file
diff --git a/tools/run_mongo.sh b/tools/run_mongo.sh
index 1c788abcaf..8c94fcf881 100755
--- a/tools/run_mongo.sh
+++ b/tools/run_mongo.sh
@@ -82,3 +82,4 @@ main () {
echo -e "${BIGreen}>>>${RST} Detached to background."
}
+main
diff --git a/vendor/deadline/custom/plugins/GlobalJobPreLoad.py b/vendor/deadline/custom/plugins/GlobalJobPreLoad.py
index d1287dd213..5e64605271 100644
--- a/vendor/deadline/custom/plugins/GlobalJobPreLoad.py
+++ b/vendor/deadline/custom/plugins/GlobalJobPreLoad.py
@@ -60,7 +60,7 @@ def inject_openpype_environment(deadlinePlugin):
with open(export_url) as fp:
contents = json.load(fp)
for key, value in contents.items():
- deadlinePlugin.SetEnvironmentVariable(key, value)
+ deadlinePlugin.SetProcessEnvironmentVariable(key, value)
os.remove(export_url)
@@ -162,4 +162,3 @@ def __main__(deadlinePlugin):
inject_openpype_environment(deadlinePlugin)
else:
pype(deadlinePlugin) # backward compatibility with Pype2
-
diff --git a/website/docs/artist_hosts_hiero.md b/website/docs/artist_hosts_hiero.md
new file mode 100644
index 0000000000..4ada1fba2d
--- /dev/null
+++ b/website/docs/artist_hosts_hiero.md
@@ -0,0 +1,193 @@
+---
+id: artist_hosts_hiero
+title: Hiero
+sidebar_label: Hiero / Nuke Studio
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+:::note
+All the information also applies to **_Nuke Studio_** (NKS), but for simplicity we only refer to Hiero/NKS. The workflows are identical for both. We support versions **`11.0`** and above.
+:::
+
+
+
+## OpenPype global tools
+
+- [Work Files](artist_tools.md#workfiles)
+- [Create](artist_tools.md#creator)
+- [Load](artist_tools.md#loader)
+- [Manage (Inventory)](artist_tools.md#inventory)
+- [Publish](artist_tools.md#publisher)
+
+
+## Hiero specific tools
+
+
+
+
+
+
+### Create Default Tags
+
+This tool will recreate all the OpenPype tags needed for successful publishing. It runs automatically when Hiero/NKS starts. Use it to manually re-create all the tags if you accidentally delete them, or to reset them to default values.
+
+#### Result
+
+- Will create tags in Tags bin in case there were none
+- Will set all tags to default values if they have been altered
+
+
+
+### Apply Colorspace Project
+
+This tool applies the colorspace settings defined in OpenPype `Settings / Project / Anatomy / Color Management and Output Formats / Hiero / Workfile` to Hiero `menu / Project / Edit Settings / Color Management tab`.
+
+#### Result
+
+- Defines correct color management settings on the project
+
+
+
+### Apply Colorspace Clips
+
+This tool applies the colorspace definitions from OpenPype `Settings / Project / Anatomy / Color Management and Output Formats / Hiero / Colorspace on Inputs by regex detection` to any clip whose source path matches.
+
+#### Result
+
+- Sets the correct `Set Media Color Transform` on each clip of the active timeline if it matches the defined expressions
+
+
+
+With OpenPype, you can use Hiero/NKS as a starting point for creating a project's **shots** as *assets* from timeline clips, together with their *hierarchical parents* such as **episodes**, **sequences**, and **folders**, and their child **tasks**. Most importantly, it will create **versions** of plate *subsets*, with or without **reference video**. Publishing also creates clip **thumbnails** and assigns them to the shot *asset*. Hiero additionally publishes an **audio** *subset* and various **soft-effects**, either as a retiming component of the published plates or as **color transformations** that will be available later for compositing artists to use, either as a *viewport input-process* or as *loaded nodes* in the graph editor.
+
+
+### Preparing timeline for conversion to instances
+We don't support on-the-fly data conversion, so if you are working with raw camera sources or other formats that need to be converted for 2D/3D work, we suggest converting them beforehand and reconforming the timeline. Before any clips in the timeline can be converted to publishable instances, we recommend the following:
+1. Merge all tracks which are supposed to be one and are only split because of the editor's style.
+2. Rename tracks to follow a basic structure: if there is only one layer, name it `main`; in case of multiple layers (elements) for one shot, use `main` plus other element names, for example `bg`, `greenscreen`, `fg01`, `fg02`, `display01`, etc. Please avoid using [-/_.,%&*] or spaces. These names will later be used in *subset* name creation as `{family}{trackName}`, so for example **plateMain** or **plateFg01**.
+3. Define the correct `Set Media Color Transform` on all clips, as it will also be published to metadata and used later for loading with the correct color transformation.
+4. Reviewable video material which you wish to be used as preview video on any supported project management platform (Ftrack) should ideally be added to a track named **review**. This can be the offline edit used as reference video for 2D/3D artists. It can be edited to fit the length of the **main** timeline track, or it can be one long video clip under all clips in the **main** track, because OpenPype will trim it to the appropriate length using FFmpeg. Please be aware we only support MP4 (h264) or JPG sequences at the moment.
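The subset naming convention described in step 2 can be sketched as follows. This is a hypothetical helper for illustration only, not OpenPype's actual implementation:

```python
def subset_name(family: str, track_name: str) -> str:
    """Compose a subset name as {family}{TrackName}, e.g. plate + main -> plateMain."""
    # capitalize only the first character of the track name, keep the rest as-is
    return "{}{}{}".format(family, track_name[:1].upper(), track_name[1:])

print(subset_name("plate", "main"))   # plateMain
print(subset_name("plate", "fg01"))   # plateFg01
```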
+
+
+
+
+### Converting timeline clips to instances
+
+Every clip on the timeline which is intended to be published has to be converted to a publishable instance.
+
+
+
+In OpenPype this is done by tagging a clip with our own metadata. Select all clips you wish to convert and go to `menu > OpenPype > Create`.
+
+
+Then choose `Create Publishable Clip` in the **Instance Creator** dialogue.
+
+
+You can alter the Subset name here, but it will be changed dynamically and replaced with the timeline's track name.
+
+
+Keep **Use selection** on.
+
+
+Hit **Create**
+
+
+
+The `Pype publish attributes creator` dialogue will open. Here you can define instance properties. If you wish to rename clips dynamically during creation, keep **Rename clips** ticked.
+
+
+In case you wish to use the *multiple elements of shots* workflow, keep **Enable vertical sync** ticked and define the correct hero track holding the main plates; this is usually the **main** track.
+
+
+Subset name is created dynamically if `` is selected on **Subset name**.
+
+
+In case you wish to publish reviewable video as explained above, pick the appropriate track from the **Use review track** drop-down menu. It is usually named `review`.
+
+
+Hover above each input field for help.
+
+
+Handles can be defined here too. In case you wish to set individual clips differently, we recommend setting the default value here and later changing it in the created OpenPype tag's metadata under the `handleStart` and `handleEnd` properties (see below for details).
+
+
+After you hit **Ok**, tags are added to the selected clips (except clips in **review** tracks).
+
+
+If you wish to change any individual property of the shot, you can do it here. In this example we can change `handleStart` and `handleEnd` to some other values.
+
diff --git a/website/docs/artist_hosts_maya.md b/website/docs/artist_hosts_maya.md
index 1ed326ebe7..d19bde7b49 100644
--- a/website/docs/artist_hosts_maya.md
+++ b/website/docs/artist_hosts_maya.md
@@ -691,3 +691,27 @@ under selected hierarchies and match them with shapes loaded with rig (published
under `input_SET`). This mechanism uses *cbId* attribute on those shapes.
If match is found shapes are connected using their `outMesh` and `outMesh`. Thus you can easily connect existing animation to loaded rig.
:::
+
+## Using Redshift Proxies
+
+OpenPype supports working with Redshift Proxy files. You can create a Redshift Proxy from
+almost any hierarchy in Maya and it will be included in the publish. Redshift can export
+the proxy as an animated sequence, one file per frame.
+
+### Creating Redshift Proxy
+
+To mark data to be published as a Redshift Proxy, select it in Maya, go to **OpenPype → Create ...**
+and select **Redshift Proxy**. You can name your subset and hit the **Create** button.
+
+You can enable animation in Attribute Editor:
+
+
+
+### Publishing Redshift Proxies
+
+Once data is marked as a Redshift Proxy instance, it can be published via **OpenPype → Publish ...**
+
+### Using Redshift Proxies
+
+Published proxy files can be loaded with the OpenPype Loader. It will create a mesh and attach
+Redshift Proxy parameters to it; Redshift will then represent the proxy with a bounding box.
diff --git a/website/docs/artist_hosts_nukestudio.md b/website/docs/artist_hosts_nukestudio.md
deleted file mode 100644
index 23301f53bf..0000000000
--- a/website/docs/artist_hosts_nukestudio.md
+++ /dev/null
@@ -1,284 +0,0 @@
----
-id: artist_hosts_nukestudio
-title: Hiero
-sidebar_label: Hiero / Nuke Studio
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-:::note
-All the information also applies to **_Nuke Studio_**, but for simplicity we only refer to Hiero. The workflows are identical for both. We are supporting versions **`11.0`** and above.
-:::
-
-
-## Hiero specific tools
-
-
-
-
-
-
-### Create Default Tags
-
-This tool will recreate all necessary OpenPype tags needed for successful publishing. It is automatically ran at start of the Hiero. Use this tool to manually re-create all the tags if you accidentaly delete them, or you want to reset them to default values.
-
-
-
-#### Result
-
-- Will create tags in Tags bin in case there were none
-- Will set all tags to default values if they have been altered
-
-## Publishing Shots
-
-
-
-
-
-
-With OpenPype, you can use Hiero as a starting point for creating project hierarchy in avalon and ftrack database (episodes, sequences, shots, folders etc.), publishing plates, reference quicktimes, audio and various soft effects that will be evailable later on for compositors and 3D artist to use.
-
-There are two ways to `Publish` data and create shots in database from Hiero. Use either context menu on right clicking selected clips or go to top `menu > OpenPype > Publish`.
-
-
-
-Keep in mind that the publishing currently works on selected shots
-
-Shot names for all the related plates that you want to publish (subsets) has to be the same to be correctly paired together (as it is shown in image).
-Note the layer **review** which contains `plateMainReview`.
-This media is just h264, 1920x1080 video for that will be used as preview of the actual `plateMain` subset and will be uploaded to Ftrack. We explain how to work with review tag in [**Reviewing**](#reviewing).
-
-
-:::important
-To to successfuly publish a shot from Hiero:
-1. At least one clip of your shot must be tagged with `Hierarchy`, `subset` and `handleStart/End`.
-2. Your source media must be pre-cut to correct length (including handles)
-:::
-
-### Tagging
-
-
-OpenPype's custom tags are used for defining shot parameters and to define which clips and how they are going to be published.
-
-If you want to add any properties to your clips you'll need to adjust values on the given tag and then drag it onto the clip.
-
-
-
-
-
-
-
-
-1. double click on preferable tag and drag&drop it to selected clip(s)
-2. Basic set of tags on clip (usually subset: plateMain)
-3. Additionally select clip and edit its parameters
-4. Edit parameters here but do not touch `family`
-
-
-
-
-:::important
-Only clips with `subset` will be directly processed for publishing.
-:::
-
-### Custom Tags Details
-
-#### Asset related
-| Icon | Description | Editable | Options |
-| ------------------- | ---------------------------------------------------------------------------------- | ------------------------------------- | ---------------------------------------------------------------------------------------- |
-| ![Hierarchy][hi] | Define parent hierarchy of the shot. Usually combined with one of subset tags. | root, folder, sequence, episode, shot | example: {sequence} = name of Hiero sequence or overwrite by any text without `-` or `_` |
-| ![Frame Start][fst] | Set start frame of the shot. Using `"source"` will keep original frame numbers. | number | int `number` or `"source"` |
-
-
-#### Subsets
-
-| Icon | Description | Editable | Options |
-| ------------------ | ------------------------------------------------------------------------------ | -------- | --------------------------------- |
-| ![Review][rew] | Choose which track holds review quicktime for the given shot. | track | `"review"` or other track name |
-| ![Plate Main][pmn] | Main plate subset identifier | subset | `"main"` or other |
-| ![Plate FG][pfg] | Foreground plate subset identifier (comped over the main plate) | subset | `"Fg##"` or other |
-| ![Plate BG][pbg] | Background plate subset identifier (comped under the main plate) | subset | `"Bg##"` or other |
-| ![Plate Ref][ref] | Reference plate subset identifier | subset | `"Ref"` or other |
-
-#### Subset's attributes
-
-| Icon | Description | Editable | Options |
-| ------------------ | --------------------------------------------------------------------------------- | ------------------- | ----------------------------- |
-| ![Resolution][rsl] | Use source resolution instead of sequence settings. | none | |
-| ![Retiming][rtm] | Publish retime metadata to shot if retime or time-warp found on clip | marginIn, marginOut | int `number` frame cushioning |
-| ![Lens][lns] | Specify lens focal length metadata (work in progress) | focalLengthMm | int `number` |
-
-#### Handles
-
-| Icon | Description | Editable | Options |
-| --------------------- | ---------------------------------------------------------------------------- | -------- | -------------------------- |
-| ![Handles Start][ahs] | Handles at the start of the clip/shot | value | change to any int `number` |
-| ![Handles End][ahe] | Handles at the end of a clip/shot | value | change to any int `number` |
-
-[hi]: assets/nks_icons/hierarchy.png
-
-[ahs]: assets/nks_icons/3_add_handles_start.png
-
-[ahe]: assets/nks_icons/1_add_handles_end.png
-
-[rsl]: assets/nks_icons/resolution.png
-
-[rtm]: assets/nks_icons/retiming.png
-
-[rew]: assets/nks_icons/review.png
-
-[pmn]: assets/nks_icons/z_layer_main.png
-
-[pfg]: assets/nks_icons/z_layer_fg.png
-
-[pbg]: assets/nks_icons/z_layer_bg.png
-
-[lns]: assets/nks_icons/lense1.png
-
-[fst]: assets/nks_icons/frame_start.png
-
-[ref]: assets/nks_icons/reference.png
-
-### Handles
-
-OpenPype requires handle information in shot metadata even if they are set to 0.
-For this you need to add handles tags to the main clip (Should be the one with Hierarchy tag).
-This way we are defining a shot property. In case you wish to have different
-handles on other subsets (e.g. when plateBG is longer than plateFG) you can add handle tags with different value to this longer plate.
-
-If you wish to have different handles length (say 100) than one of the default tags, simply drag `start: add 10 frames` to your clip
-and then go to clips tags, find the tag, then replace 10 for 100 in name and also change value to 100.
-This is also explained following tutorial [`Extending premade handles tags`](#extending-premade-handles-tags)
-
-:::caution
-Even if you don't need any handles you have to add `start: add 0 frames` and `end: add 0 frames` tags to the clip with Hierarchy tag.
-:::
-
-### Retiming
-
-OpenPype is also able to publish retiming parameters into the database.
-Any clip with **editorial**/**retime** or **TimeWarp** soft effect has to be tagged with `Retiming` tag, if you want this information preserved during publishing.
-
-Any animation on **TimeWarp** is also preserved and reapplied in _Nuke_.
-
-You can only combine **retime** and with a single **Timewarp**.
-
-### Reviewing
-
-There are two ways to publish reviewable **h264 mov** into OpenPype (and Ftrack).
-
-
-
-
-
-
-
-The first one uses the Review Tag pointing to the track that holds the reviewable quicktimes for plates.
-
-This tag metadata has `track` key inside that points to `review` track by default. If you drop this tag onto any publishable clip on the timeline you're telling OpenPype "you will find quicktime version of this plate on `review` track (clips must have the same name)"
-
-In the image on the right we dropped it to **plateMain** clip. Then we renamed the layer tha hold reviewable quicktime called `plateMainReview`. You can see that the clip names are the same.
-
-
-
-
-
-
-
-
-
-1. `-review` suffix is added to publishing item label if any reviewable file is found
-2. `plateMain` clip is holding the Review tag
-3. layer name is `review` as it is used as default in _Review_ Tag in _track_
-4. name of clip is the same across all subsets
-
-
-
-
-
-
-
-
-Second way would be to add the **h264 mov 1920x1080** into the same folder
-as image sequence. The name of the file has to be the same as image sequence.
-Publisher will pick this file up and add it to the files list during collecting.
-This will also add `"- review"` to instance label in **Publish**.
-
-Example:
-
-- img seq: `image_sequence_name.0001.exr`
-- mov: `image_sequence_name.mov`
-
-
-
-
-
---------------
-
-
-### LUT Workflow
-
-
-
-
-
-It is possible to publish Hiero soft effects for compositors to use later on. You can add the effect to a particular clip or to whole layer as shows on the picture. All clips
-below the `Video 6` layer (green arrow) will be published with the **LUT** subset which combines all the colour corrections from he soft effects. Any clips above the `Video 6` layer will have no **LUT** published with them.
-
-
-
-
-Any external Lut files used in the soft effects will be copied over to `resources` of the published subset folder `lutPlateMain` (in our example).
-
-:::note
-
-
-
-
-You cannot currently publish soft effects on their own because at the moment we only support soft effects as a part of other subset publishing. Image is demonstrating successful publishing.
-
-