Merge remote-tracking branch 'origin/develop' into feature/OP-3553_update-to-python-39

Ondřej Samohel 2022-08-29 12:22:54 +02:00
commit c5c83b0a8d
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
651 changed files with 61253 additions and 8045 deletions

.gitignore vendored

@ -102,5 +102,8 @@ website/.docusaurus
.poetry/
.python-version
.editorconfig
.pre-commit-config.yaml
mypy.ini
tools/run_eventserver.*

.gitmodules vendored

@ -4,4 +4,4 @@
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git
url = https://github.com/EvotecIT/PSWriteColor.git


@ -1,129 +1,154 @@
# Changelog
## [3.12.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.1-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.0...HEAD)
### 📖 Documentation
- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
**🆕 New features**
- Webpublisher: change create flatten image into tri-state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
**🚀 Enhancements**
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Ftrack: Trigger custom ftrack topic of project structure creation [\#3506](https://github.com/pypeclub/OpenPype/pull/3506)
- Settings UI: Add extract to file action on project view [\#3505](https://github.com/pypeclub/OpenPype/pull/3505)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- General: Event system [\#3499](https://github.com/pypeclub/OpenPype/pull/3499)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
- Migrate basic families to the new Tray Publisher [\#3469](https://github.com/pypeclub/OpenPype/pull/3469)
- Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Ftrack: More logs related to auto sync value change [\#3671](https://github.com/pypeclub/OpenPype/pull/3671)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
**🐛 Bug fixes**
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
- Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721)
- Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716)
- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709)
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- RoyalRender: handle host name that is not set [\#3695](https://github.com/pypeclub/OpenPype/pull/3695)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
**🔀 Refactored code**
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- TimersManager: Use query functions [\#3495](https://github.com/pypeclub/OpenPype/pull/3495)
- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
- AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730)
- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729)
- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
- General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723)
- General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714)
- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713)
- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710)
- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707)
- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706)
- TrayPublisher: Define TrayPublisher as module [\#3705](https://github.com/pypeclub/OpenPype/pull/3705)
- General: Move context specific functions to context tools [\#3702](https://github.com/pypeclub/OpenPype/pull/3702)
**Merged pull requests:**
- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
**🚀 Enhancements**
- Ftrack: Additional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
- Ftrack: Set task status on farm publishing [\#3680](https://github.com/pypeclub/OpenPype/pull/3680)
- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650)
**🐛 Bug fixes**
- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638)
- Nuke: color settings for render write node is working now [\#3632](https://github.com/pypeclub/OpenPype/pull/3632)
- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631)
**🔀 Refactored code**
- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653)
- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636)
- fix the bug of failing to extract look when UDIMs format used in AiImage [\#3628](https://github.com/pypeclub/OpenPype/pull/3628)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🆕 New features**
- Support for multiple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615)
- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620)
- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617)
- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611)
- AfterEffects: refactored integrate doesn't work for multi frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610)
- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607)
- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601)
**Merged pull requests:**
- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619)
- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
**🐛 Bug fixes**
- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
### 📖 Documentation
- Fix typo in documentation: pyenv on mac [\#3417](https://github.com/pypeclub/OpenPype/pull/3417)
- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
**🚀 Enhancements**
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
**🐛 Bug fixes**
- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420)
- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418)
- Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414)
- Houdini: fix loading and updating vbd/bgeo sequences [\#3408](https://github.com/pypeclub/OpenPype/pull/3408)
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
**🔀 Refactored code**
- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421)
- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419)
- Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397)
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)


@ -122,7 +122,7 @@ class OpenPypeVersion(semver.VersionInfo):
if self.staging:
if kwargs.get("build"):
if "staging" not in kwargs.get("build"):
kwargs["build"] = "{}-staging".format(kwargs.get("build"))
kwargs["build"] = f"{kwargs.get('build')}-staging"
else:
kwargs["build"] = "staging"
@ -136,8 +136,7 @@ class OpenPypeVersion(semver.VersionInfo):
return bool(result and self.staging == other.staging)
def __repr__(self):
return "<{}: {} - path={}>".format(
self.__class__.__name__, str(self), self.path)
return f"<{self.__class__.__name__}: {str(self)} - path={self.path}>"
def __lt__(self, other: OpenPypeVersion):
result = super().__lt__(other)
@ -232,10 +231,7 @@ class OpenPypeVersion(semver.VersionInfo):
return openpype_version
def __hash__(self):
if self.path:
return hash(self.path)
else:
return hash(str(self))
return hash(self.path) if self.path else hash(str(self))
@staticmethod
def is_version_in_dir(
@ -384,7 +380,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_local_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None
) -> List:
"""Get all versions available on this machine.
@ -394,6 +391,10 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
Returns:
list: Compatible versions available on the machine.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -410,10 +411,10 @@ class OpenPypeVersion(semver.VersionInfo):
if not production and not staging:
return []
# DEPRECATED: backwards compatible way to look for versions in root
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(
dir_to_search
)
versions = OpenPypeVersion.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
if version.is_staging():
@ -425,7 +426,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_remote_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None
) -> List:
"""Get all versions available in OpenPype Path.
@ -435,6 +437,7 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -469,6 +472,7 @@ class OpenPypeVersion(semver.VersionInfo):
return []
versions = cls.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
if version.is_staging():
@ -479,7 +483,8 @@ class OpenPypeVersion(semver.VersionInfo):
return list(sorted(set(filtered_versions)))
@staticmethod
def get_versions_from_directory(openpype_dir: Path) -> List:
def get_versions_from_directory(
openpype_dir: Path) -> List:
"""Get all detected OpenPype versions in directory.
Args:
@ -492,15 +497,22 @@ class OpenPypeVersion(semver.VersionInfo):
ValueError: if invalid path is specified.
"""
openpype_versions = []
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
return openpype_versions
_openpype_versions = []
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
# if the item is directory with major.minor version, dive deeper
# if file, strip extension, in case of dir not.
if item.is_dir() and re.match(r"^\d+\.\d+$", item.name):
_versions = OpenPypeVersion.get_versions_from_directory(
item)
if _versions:
openpype_versions += _versions
# if file exists, strip extension, in case of dir don't.
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)
@ -519,9 +531,9 @@ class OpenPypeVersion(semver.VersionInfo):
continue
detected_version.path = item
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
return sorted(_openpype_versions)
return sorted(openpype_versions)
@staticmethod
def get_installed_version_str() -> str:
@ -550,13 +562,13 @@ class OpenPypeVersion(semver.VersionInfo):
staging: bool = False,
local: bool = None,
remote: bool = None
) -> OpenPypeVersion:
"""Get latest available version.
) -> Union[OpenPypeVersion, None]:
"""Get the latest available version.
The version does not contain information about path and source.
This is utility version to get latest version from all found. Build
version is not listed if staging is enabled.
This is a utility to get the latest version from all found.
Build version is not listed if staging is enabled.
Arguments 'local' and 'remote' define if local and remote repository
versions are used. All versions are used if both are not set (or set
@ -568,6 +580,10 @@ class OpenPypeVersion(semver.VersionInfo):
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
Returns:
Latest OpenPypeVersion or None
"""
if local is None and remote is None:
local = True
@ -621,6 +637,21 @@ class OpenPypeVersion(semver.VersionInfo):
return None
return OpenPypeVersion(version=result)
def is_compatible(self, version: OpenPypeVersion):
"""Test build compatibility.
This will simply compare major and minor versions (ignoring patch
and the rest).
Args:
version (OpenPypeVersion): Version to check compatibility with.
Returns:
bool: if the version is compatible
"""
return self.major == version.major and self.minor == version.minor
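For illustration, a minimal sketch of the compatibility semantics described in the docstring above (assuming the `OpenPypeVersion(version=...)` constructor used elsewhere in this file):

    # compatibility compares only major.minor; patch and prerelease are ignored
    installed = OpenPypeVersion(version="3.14.0")
    assert installed.is_compatible(OpenPypeVersion(version="3.14.1-nightly.3"))
    assert not installed.is_compatible(OpenPypeVersion(version="3.13.0"))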
class BootstrapRepos:
"""Class for bootstrapping local OpenPype installation.
@ -714,9 +745,9 @@ class BootstrapRepos:
self, repo_dir: Path = None) -> Union[OpenPypeVersion, None]:
"""Copy zip created from OpenPype repositories to user data dir.
This detect OpenPype version either in local "live" OpenPype
This detects OpenPype version either in local "live" OpenPype
repository or in user provided path. Then it will zip it in temporary
directory and finally it will move it to destination which is user
directory, and finally it will move it to destination which is user
data directory. Existing files will be replaced.
Args:
@ -727,7 +758,7 @@ class BootstrapRepos:
"""
# if repo dir is not set, we detect local "live" OpenPype repository
# version and use it as a source. Otherwise repo_dir is user
# version and use it as a source. Otherwise, repo_dir is user
# entered location.
if repo_dir:
version = self.get_version(repo_dir)
@ -741,8 +772,9 @@ class BootstrapRepos:
return
# create destination directory
if not self.data_dir.exists():
self.data_dir.mkdir(parents=True)
destination = self.data_dir / f"{installed_version.major}.{installed_version.minor}" # noqa
if not destination.exists():
destination.mkdir(parents=True)
# create zip inside temporary directory.
with tempfile.TemporaryDirectory() as temp_dir:
@ -770,7 +802,9 @@ class BootstrapRepos:
Path to moved zip on success.
"""
destination = self.data_dir / zip_file.name
version = OpenPypeVersion.version_in_str(zip_file.name)
destination_dir = self.data_dir / f"{version.major}.{version.minor}"
destination = destination_dir / zip_file.name
if destination.exists():
self._print(
@ -782,7 +816,7 @@ class BootstrapRepos:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
try:
shutil.move(zip_file.as_posix(), self.data_dir.as_posix())
shutil.move(zip_file.as_posix(), destination_dir.as_posix())
except shutil.Error as e:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
@ -995,6 +1029,16 @@ class BootstrapRepos:
@staticmethod
def _validate_dir(path: Path) -> tuple:
"""Validate checksums in a given path.
Args:
path (Path): path to folder to validate.
Returns:
tuple(bool, str): returns status and reason as a bool
and str in a tuple.
"""
checksums_file = Path(path / "checksums")
if not checksums_file.exists():
# FIXME: This should be set to False sometimes in the future
@ -1076,11 +1120,24 @@ class BootstrapRepos:
sys.path.insert(0, directory.as_posix())
@staticmethod
def find_openpype_version(version, staging):
def find_openpype_version(
version: Union[str, OpenPypeVersion],
staging: bool
) -> Union[OpenPypeVersion, None]:
"""Find location of specified OpenPype version.
Args:
version (Union[str, OpenPypeVersion]): Version to find.
staging (bool): Filter staging versions.
Returns:
requested OpenPypeVersion.
"""
installed_version = OpenPypeVersion.get_installed_version()
if isinstance(version, str):
version = OpenPypeVersion(version=version)
installed_version = OpenPypeVersion.get_installed_version()
if installed_version == version:
return installed_version
@ -1107,7 +1164,18 @@ class BootstrapRepos:
return None
@staticmethod
def find_latest_openpype_version(staging):
def find_latest_openpype_version(
staging: bool
) -> Union[OpenPypeVersion, None]:
"""Find the latest available OpenPype version in all location.
Args:
staging (bool): True to look for staging versions.
Returns:
Latest OpenPype version or None if nothing was found.
"""
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
@ -1138,7 +1206,8 @@ class BootstrapRepos:
self,
openpype_path: Union[Path, str] = None,
staging: bool = False,
include_zips: bool = False) -> Union[List[OpenPypeVersion], None]:
include_zips: bool = False
) -> Union[List[OpenPypeVersion], None]:
"""Get ordered dict of detected OpenPype version.
Resolution order for OpenPype is following:
@ -1172,30 +1241,38 @@ class BootstrapRepos:
("Finding OpenPype in non-filesystem locations is"
" not implemented yet."))
dir_to_search = self.data_dir
user_versions = self.get_openpype_versions(self.data_dir, staging)
# if we have openpype_path specified, search only there.
# if checks below for OPENPYPE_PATH and registry fail, use data_dir
# DEPRECATED: lookup in root of this folder is deprecated in favour
# of major.minor sub-folders.
dirs_to_search = [self.data_dir]
if openpype_path:
dir_to_search = openpype_path
dirs_to_search = [openpype_path]
elif os.getenv("OPENPYPE_PATH") \
and Path(os.getenv("OPENPYPE_PATH")).exists():
# first try OPENPYPE_PATH and if that is not available,
# try registry.
dirs_to_search = [Path(os.getenv("OPENPYPE_PATH"))]
else:
if os.getenv("OPENPYPE_PATH"):
if Path(os.getenv("OPENPYPE_PATH")).exists():
dir_to_search = Path(os.getenv("OPENPYPE_PATH"))
else:
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dir_to_search = registry_dir
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dirs_to_search = [registry_dir]
except ValueError:
# nothing found in registry, we'll use data dir
pass
except ValueError:
# nothing found in registry, we'll use data dir
pass
openpype_versions = self.get_openpype_versions(dir_to_search, staging)
openpype_versions += user_versions
openpype_versions = []
for dir_to_search in dirs_to_search:
try:
openpype_versions += self.get_openpype_versions(
dir_to_search, staging)
except ValueError:
# location is invalid, skip it
pass
# remove zip file version if needed.
if not include_zips:
openpype_versions = [
v for v in openpype_versions if v.path.suffix != ".zip"
@ -1308,9 +1385,8 @@ class BootstrapRepos:
raise ValueError(
f"version {version} is not associated with any file")
destination = self.data_dir / version.path.stem
if destination.exists():
assert destination.is_dir()
destination = self.data_dir / f"{version.major}.{version.minor}" / version.path.stem # noqa
if destination.exists() and destination.is_dir():
try:
shutil.rmtree(destination)
except OSError as e:
@ -1379,7 +1455,7 @@ class BootstrapRepos:
else:
dir_name = openpype_version.path.stem
destination = self.data_dir / dir_name
destination = self.data_dir / f"{openpype_version.major}.{openpype_version.minor}" / dir_name # noqa
# test if destination directory already exist, if so lets delete it.
if destination.exists() and force:
@ -1557,9 +1633,10 @@ class BootstrapRepos:
return False
return True
def get_openpype_versions(self,
openpype_dir: Path,
staging: bool = False) -> list:
def get_openpype_versions(
self,
openpype_dir: Path,
staging: bool = False) -> list:
"""Get all detected OpenPype versions in directory.
Args:
@ -1574,14 +1651,20 @@ class BootstrapRepos:
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
raise ValueError(f"specified directory {openpype_dir} is invalid")
_openpype_versions = []
openpype_versions = []
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
# if the item is directory with major.minor version, dive deeper
if item.is_dir() and re.match(r"^\d+\.\d+$", item.name):
_versions = self.get_openpype_versions(
item, staging=staging)
if _versions:
openpype_versions += _versions
# if file, strip extension, in case of dir not.
# if it is file, strip extension, in case of dir don't.
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)
@ -1601,12 +1684,12 @@ class BootstrapRepos:
detected_version.path = item
if staging and detected_version.is_staging():
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
if not staging and not detected_version.is_staging():
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
return sorted(_openpype_versions)
return sorted(openpype_versions)
class OpenPypeVersionExists(Exception):


@ -62,7 +62,7 @@ class InstallThread(QThread):
progress_callback=self.set_progress, message=self.message)
local_version = OpenPypeVersion.get_installed_version_str()
# if user did entered nothing, we install OpenPype from local version.
# if user did enter nothing, we install OpenPype from local version.
# zip content of `repos`, copy it to user data dir and append
# version to it.
if not self._path:
@ -93,6 +93,23 @@ class InstallThread(QThread):
detected = bs.find_openpype(include_zips=True)
if detected:
if not OpenPypeVersion.get_installed_version().is_compatible(
detected[-1]):
self.message.emit((
f"Latest detected version {detected[-1]} "
"is not compatible with the currently running "
f"{local_version}"
), True)
self.message.emit((
"Filtering detected versions to compatible ones..."
), False)
detected = [
version for version in detected
if version.is_compatible(
OpenPypeVersion.get_installed_version())
]
if OpenPypeVersion(
version=local_version, path=Path()) < detected[-1]:
self.message.emit((


@ -21,6 +21,11 @@ class OpenPypeVersionNotFound(Exception):
pass
class OpenPypeVersionIncompatible(Exception):
"""OpenPype version is not compatible with the installed one (build)."""
pass
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.


@ -9,6 +9,7 @@ from .settings import (
)
from .lib import (
PypeLogger,
Logger,
Anatomy,
config,
execute,
@ -58,8 +59,6 @@ from .action import (
RepairContextAction
)
# for backward compatibility with Pype 2
Logger = PypeLogger
__all__ = [
"get_system_settings",


@ -40,18 +40,6 @@ def settings(dev):
PypeCommands().launch_settings_gui(dev)
@main.command()
def standalonepublisher():
"""Show Pype Standalone publisher UI."""
PypeCommands().launch_standalone_publisher()
@main.command()
def traypublisher():
"""Show new OpenPype Standalone publisher UI."""
PypeCommands().launch_traypublisher()
@main.command()
def tray():
"""Launch pype tray.
@ -443,3 +431,26 @@ def interactive():
__version__, sys.version, sys.platform
)
code.interact(banner)
@main.command()
@click.option("--build", help="Print only build version",
is_flag=True, default=False)
def version(build):
"""Print OpenPype version."""
from openpype.version import __version__
from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion
from pathlib import Path
import os
if getattr(sys, 'frozen', False):
local_version = BootstrapRepos.get_version(
Path(os.getenv("OPENPYPE_ROOT")))
else:
local_version = OpenPypeVersion.get_installed_version_str()
if build:
print(local_version)
return
print(f"{__version__} (booted: {local_version})")


@ -1,3 +1,7 @@
from .mongo import (
OpenPypeMongoConnection,
)
from .entities import (
get_projects,
get_project,
@ -25,6 +29,8 @@ from .entities import (
get_last_version_by_subset_name,
get_output_link_versions,
version_is_latest,
get_representation_by_id,
get_representation_by_name,
get_representations,
@ -40,6 +46,8 @@ from .entities import (
)
__all__ = (
"OpenPypeMongoConnection",
"get_projects",
"get_project",
"get_whole_project",
@ -66,6 +74,8 @@ __all__ = (
"get_last_version_by_subset_name",
"get_output_link_versions",
"version_is_latest",
"get_representation_by_id",
"get_representation_by_name",
"get_representations",


@ -6,24 +6,13 @@ that has project name as a context (e.g. on 'ProjectEntity'?).
+ We will need more specific functions doing very specific queries really fast.
"""
import os
import re
import collections
import six
from bson.objectid import ObjectId
from openpype.lib.mongo import OpenPypeMongoConnection
def _get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def _get_project_connection(project_name):
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return _get_project_database()[project_name]
from .mongo import get_project_database, get_project_connection
def _prepare_fields(fields, required_fields=None):
@ -58,7 +47,7 @@ def _convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
mongodb = _get_project_database()
mongodb = get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
continue
@ -93,7 +82,7 @@ def get_project(project_name, active=True, inactive=False, fields=None):
{"data.active": False},
]
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -108,7 +97,7 @@ def get_whole_project(project_name):
project collection.
"""
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find({})
@ -131,7 +120,7 @@ def get_asset_by_id(project_name, asset_id, fields=None):
return None
query_filter = {"type": "asset", "_id": asset_id}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -153,7 +142,7 @@ def get_asset_by_name(project_name, asset_name, fields=None):
return None
query_filter = {"type": "asset", "name": asset_name}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -223,7 +212,7 @@ def _get_assets(
return []
query_filter["data.visualParent"] = {"$in": parent_ids}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
@ -323,7 +312,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
return []
subset_query["parent"] = {"$in": asset_ids}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
result = conn.aggregate([
{
"$match": subset_query
@ -363,7 +352,7 @@ def get_subset_by_id(project_name, subset_id, fields=None):
return None
query_filters = {"type": "subset", "_id": subset_id}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filters, _prepare_fields(fields))
@ -394,7 +383,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
"name": subset_name,
"parent": asset_id
}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filters, _prepare_fields(fields))
@ -467,7 +456,7 @@ def get_subsets(
return []
query_filter["$or"] = or_query
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
@ -491,7 +480,7 @@ def get_subset_families(project_name, subset_ids=None):
return set()
subset_filter["_id"] = {"$in": list(subset_ids)}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
result = list(conn.aggregate([
{"$match": subset_filter},
{"$project": {
@ -529,7 +518,7 @@ def get_version_by_id(project_name, version_id, fields=None):
"type": {"$in": ["version", "hero_version"]},
"_id": version_id
}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -552,7 +541,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
if not subset_id:
return None
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
query_filter = {
"type": "version",
"parent": subset_id,
@ -561,6 +550,42 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
return conn.find_one(query_filter, _prepare_fields(fields))
def version_is_latest(project_name, version_id):
"""Is version the latest from it's subset.
Note:
Hero versions are considered as latest.
Todo:
Maybe raise exception when version was not found?
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Version id which is checked.
Returns:
bool: True if is latest version from subset else False.
"""
version_id = _convert_id(version_id)
if not version_id:
return False
version_doc = get_version_by_id(
project_name, version_id, fields=["_id", "type", "parent"]
)
# What to do when version is not found?
if not version_doc:
return False
if version_doc["type"] == "hero_version":
return True
last_version = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
return last_version["_id"] == version_id
def _get_versions(
project_name,
subset_ids=None,
@ -606,7 +631,7 @@ def _get_versions(
else:
query_filter["name"] = {"$in": versions}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
@ -765,11 +790,11 @@ def get_output_link_versions(project_name, version_id, fields=None):
if not version_id:
return []
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
# Does it make sense to look for hero versions?
query_filter = {
"type": "version",
"data.inputLinks.input": version_id
"data.inputLinks.id": version_id
}
return conn.find(query_filter, _prepare_fields(fields))
@ -830,7 +855,7 @@ def get_last_versions(project_name, subset_ids, fields=None):
{"$group": group_item}
]
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
aggregate_result = conn.aggregate(aggregation_pipeline)
if limit_query:
output = {}
@ -948,7 +973,7 @@ def get_representation_by_id(project_name, representation_id, fields=None):
if representation_id is not None:
query_filter["_id"] = _convert_id(representation_id)
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -981,21 +1006,74 @@ def get_representation_by_name(
"parent": version_id
}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
def _flatten_dict(data):
flatten_queue = collections.deque()
flatten_queue.append(data)
output = {}
while flatten_queue:
item = flatten_queue.popleft()
for key, value in item.items():
if not isinstance(value, dict):
output[key] = value
continue
tmp = {}
for subkey, subvalue in value.items():
new_key = "{}.{}".format(key, subkey)
tmp[new_key] = subvalue
flatten_queue.append(tmp)
return output
def _regex_filters(filters):
output = []
for key, value in filters.items():
regexes = []
a_values = []
if isinstance(value, re.Pattern):
regexes.append(value)
elif isinstance(value, (list, tuple, set)):
for item in value:
if isinstance(item, re.Pattern):
regexes.append(item)
else:
a_values.append(item)
else:
a_values.append(value)
key_filters = []
if len(a_values) == 1:
key_filters.append({key: a_values[0]})
elif a_values:
key_filters.append({key: {"$in": a_values}})
for regex in regexes:
key_filters.append({key: {"$regex": regex}})
if len(key_filters) == 1:
output.append(key_filters[0])
else:
output.append({"$or": key_filters})
return output
def _get_representations(
project_name,
representation_ids,
representation_names,
version_ids,
extensions,
context_filters,
names_by_version_ids,
standard,
archived,
fields
):
default_output = []
repre_types = []
if standard:
repre_types.append("representation")
@ -1003,7 +1081,7 @@ def _get_representations(
repre_types.append("archived_representation")
if not repre_types:
return []
return default_output
if len(repre_types) == 1:
query_filter = {"type": repre_types[0]}
@ -1013,25 +1091,21 @@ def _get_representations(
if representation_ids is not None:
representation_ids = _convert_ids(representation_ids)
if not representation_ids:
return []
return default_output
query_filter["_id"] = {"$in": representation_ids}
if representation_names is not None:
if not representation_names:
return []
return default_output
query_filter["name"] = {"$in": list(representation_names)}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
if not version_ids:
return []
return default_output
query_filter["parent"] = {"$in": version_ids}
if extensions is not None:
if not extensions:
return []
query_filter["context.ext"] = {"$in": list(extensions)}
or_queries = []
if names_by_version_ids is not None:
or_query = []
for version_id, names in names_by_version_ids.items():
@ -1041,10 +1115,38 @@ def _get_representations(
"name": {"$in": list(names)}
})
if not or_query:
return []
query_filter["$or"] = or_query
return default_output
or_queries.append(or_query)
conn = _get_project_connection(project_name)
if context_filters is not None:
if not context_filters:
return []
_flatten_filters = _flatten_dict(context_filters)
flatten_filters = {}
for key, value in _flatten_filters.items():
if not key.startswith("context"):
key = "context.{}".format(key)
flatten_filters[key] = value
for item in _regex_filters(flatten_filters):
for key, value in item.items():
if key != "$or":
query_filter[key] = value
elif value:
or_queries.append(value)
if len(or_queries) == 1:
query_filter["$or"] = or_queries[0]
elif or_queries:
and_query = []
for or_query in or_queries:
if isinstance(or_query, list):
or_query = {"$or": or_query}
and_query.append(or_query)
query_filter["$and"] = and_query
conn = get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
@ -1054,7 +1156,7 @@ def get_representations(
representation_ids=None,
representation_names=None,
version_ids=None,
extensions=None,
context_filters=None,
names_by_version_ids=None,
archived=False,
standard=True,
@ -1072,8 +1174,8 @@ def get_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
extensions (Iterable[str]): Filter by extension of main representation
file (without dot).
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
archived (bool): Output will also contain archived representations.
@ -1089,7 +1191,7 @@ def get_representations(
representation_ids=representation_ids,
representation_names=representation_names,
version_ids=version_ids,
extensions=extensions,
context_filters=context_filters,
names_by_version_ids=names_by_version_ids,
standard=True,
archived=archived,
@ -1102,7 +1204,7 @@ def get_archived_representations(
representation_ids=None,
representation_names=None,
version_ids=None,
extensions=None,
context_filters=None,
names_by_version_ids=None,
fields=None
):
@ -1118,8 +1220,8 @@ def get_archived_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
extensions (Iterable[str]): Filter by extension of main representation
file (without dot).
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
fields (Iterable[str]): Fields that should be returned. All fields are
@ -1134,7 +1236,7 @@ def get_archived_representations(
representation_ids=representation_ids,
representation_names=representation_names,
version_ids=version_ids,
extensions=extensions,
context_filters=context_filters,
names_by_version_ids=names_by_version_ids,
standard=False,
archived=True,
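As a usage sketch of the new `context_filters` argument (the project name and context values here are placeholders): keys are matched against the representation's `context` sub-document, the `context.` prefix is added automatically when missing, and plain values may be mixed with compiled regexes:

    import re
    from openpype.client import get_representations

    repre_docs = get_representations(
        "my_project",  # placeholder project name
        context_filters={
            "asset": "sh010",  # exact value match
            "ext": ["exr", re.compile("^mov$")],  # values and regexes can mix
        },
    )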
@ -1157,58 +1259,64 @@ def get_representations_parents(project_name, representations):
dict[ObjectId, tuple]: Parents by representation id.
"""
repres_by_version_id = collections.defaultdict(list)
versions_by_version_id = {}
versions_by_subset_id = collections.defaultdict(list)
subsets_by_subset_id = {}
subsets_by_asset_id = collections.defaultdict(list)
repre_docs_by_version_id = collections.defaultdict(list)
version_docs_by_version_id = {}
version_docs_by_subset_id = collections.defaultdict(list)
subset_docs_by_subset_id = {}
subset_docs_by_asset_id = collections.defaultdict(list)
output = {}
for representation in representations:
repre_id = representation["_id"]
for repre_doc in representations:
repre_id = repre_doc["_id"]
version_id = repre_doc["parent"]
output[repre_id] = (None, None, None, None)
version_id = representation["parent"]
repres_by_version_id[version_id].append(representation)
repre_docs_by_version_id[version_id].append(repre_doc)
versions = get_versions(
project_name, version_ids=repres_by_version_id.keys()
version_docs = get_versions(
project_name,
version_ids=repre_docs_by_version_id.keys(),
hero=True
)
for version in versions:
version_id = version["_id"]
subset_id = version["parent"]
versions_by_version_id[version_id] = version
versions_by_subset_id[subset_id].append(version)
for version_doc in version_docs:
version_id = version_doc["_id"]
subset_id = version_doc["parent"]
version_docs_by_version_id[version_id] = version_doc
version_docs_by_subset_id[subset_id].append(version_doc)
subsets = get_subsets(
project_name, subset_ids=versions_by_subset_id.keys()
subset_docs = get_subsets(
project_name, subset_ids=version_docs_by_subset_id.keys()
)
for subset in subsets:
subset_id = subset["_id"]
asset_id = subset["parent"]
subsets_by_subset_id[subset_id] = subset
subsets_by_asset_id[asset_id].append(subset)
for subset_doc in subset_docs:
subset_id = subset_doc["_id"]
asset_id = subset_doc["parent"]
subset_docs_by_subset_id[subset_id] = subset_doc
subset_docs_by_asset_id[asset_id].append(subset_doc)
assets = get_assets(project_name, asset_ids=subsets_by_asset_id.keys())
assets_by_id = {
asset["_id"]: asset
for asset in assets
asset_docs = get_assets(
project_name, asset_ids=subset_docs_by_asset_id.keys()
)
asset_docs_by_id = {
asset_doc["_id"]: asset_doc
for asset_doc in asset_docs
}
project = get_project(project_name)
project_doc = get_project(project_name)
for version_id, representations in repres_by_version_id.items():
asset = None
subset = None
version = versions_by_version_id.get(version_id)
if version:
subset_id = version["parent"]
subset = subsets_by_subset_id.get(subset_id)
if subset:
asset_id = subset["parent"]
asset = assets_by_id.get(asset_id)
for version_id, repre_docs in repre_docs_by_version_id.items():
asset_doc = None
subset_doc = None
version_doc = version_docs_by_version_id.get(version_id)
if version_doc:
subset_id = version_doc["parent"]
subset_doc = subset_docs_by_subset_id.get(subset_id)
if subset_doc:
asset_id = subset_doc["parent"]
asset_doc = asset_docs_by_id.get(asset_id)
for representation in representations:
repre_id = representation["_id"]
output[repre_id] = (version, subset, asset, project)
for repre_doc in repre_docs:
repre_id = repre_doc["_id"]
output[repre_id] = (
version_doc, subset_doc, asset_doc, project_doc
)
return output
@ -1255,7 +1363,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
query_filter = {"_id": _convert_id(src_id)}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
src_doc = conn.find_one(query_filter, {"data.thumbnail_id"})
if src_doc:
return src_doc.get("data", {}).get("thumbnail_id")
@ -1288,7 +1396,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"type": "thumbnail",
"_id": {"$in": thumbnail_ids}
}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
@ -1309,7 +1417,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
if not thumbnail_id:
return None
query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -1340,14 +1448,14 @@ def get_workfile_info(
"task_name": task_name,
"filename": filename
}
conn = _get_project_connection(project_name)
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
"""
## Custom data storage:
- Settings - OP settings overrides and local settings
- Logging - logs from PypeLogger
- Logging - logs from Logger
- Webpublisher - jobs
- Ftrack - events
- Maya - Shaders

openpype/client/mongo.py Normal file

@ -0,0 +1,235 @@
import os
import sys
import time
import logging
import pymongo
import certifi
if sys.version_info[0] == 2:
from urlparse import urlparse, parse_qs
else:
from urllib.parse import urlparse, parse_qs
class MongoEnvNotSet(Exception):
pass
def _decompose_url(url):
"""Decompose mongo url to basic components.
Used for creation of MongoHandler which expects mongo url components as
separate kwargs. The components are ultimately not used as we're setting
the connection directly; these are just dummy components for the
MongoHandler validation pass.
"""
# Use first url from passed url
# - this is because it is possible to pass multiple urls for multiple
# replica sets which would crash on urlparse otherwise
# - please don't use comma in username or password
url = url.split(",")[0]
components = {
"scheme": None,
"host": None,
"port": None,
"username": None,
"password": None,
"auth_db": None
}
result = urlparse(url)
if result.scheme is None:
_url = "mongodb://{}".format(url)
result = urlparse(_url)
components["scheme"] = result.scheme
components["host"] = result.hostname
try:
components["port"] = result.port
except ValueError:
raise RuntimeError("invalid port specified")
components["username"] = result.username
components["password"] = result.password
try:
components["auth_db"] = parse_qs(result.query)['authSource'][0]
except KeyError:
# no auth db provided, mongo will use the one we are connecting to
pass
return components
def get_default_components():
mongo_url = os.environ.get("OPENPYPE_MONGO")
if mongo_url is None:
raise MongoEnvNotSet(
"URL for Mongo logging connection is not set."
)
return _decompose_url(mongo_url)
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.
Since 30.9.2021 cloud mongo requires newer certificates that are not
available on most of workstation. This adds path to certifi certificate
which is valid for it. To add the certificate path url must have scheme
'mongodb+srv' or has 'ssl=true' or 'tls=true' in url query.
"""
parsed = urlparse(mongo_url)
query = parse_qs(parsed.query)
lowered_query_keys = set(key.lower() for key in query.keys())
add_certificate = False
# Check if url 'ssl' or 'tls' are set to 'true'
for key in ("ssl", "tls"):
if key in query and "true" in query[key]:
add_certificate = True
break
# Check if url contains 'mongodb+srv'
if not add_certificate and parsed.scheme == "mongodb+srv":
add_certificate = True
# Check if url does already contain certificate path
if add_certificate and "tlscafile" in lowered_query_keys:
add_certificate = False
return add_certificate
def validate_mongo_connection(mongo_uri):
"""Check if provided mongodb URL is valid.
Args:
mongo_uri (str): URL to validate.
Raises:
ValueError: When port in mongo uri is not valid.
pymongo.errors.InvalidURI: If passed mongo is invalid.
pymongo.errors.ServerSelectionTimeoutError: If the connection timeout
passed, so the mongo server probably couldn't be reached.
"""
client = OpenPypeMongoConnection.create_connection(
mongo_uri, retry_attempts=1
)
client.close()
class OpenPypeMongoConnection:
"""Singleton MongoDB connection.
Keeps MongoDB connections by url.
"""
mongo_clients = {}
log = logging.getLogger("OpenPypeMongoConnection")
@staticmethod
def get_default_mongo_url():
return os.environ["OPENPYPE_MONGO"]
@classmethod
def get_mongo_client(cls, mongo_url=None):
if mongo_url is None:
mongo_url = cls.get_default_mongo_url()
connection = cls.mongo_clients.get(mongo_url)
if connection:
# Naive validation of existing connection
try:
connection.server_info()
with connection.start_session():
pass
except Exception:
connection = None
if not connection:
cls.log.debug("Creating mongo connection to {}".format(mongo_url))
connection = cls.create_connection(mongo_url)
cls.mongo_clients[mongo_url] = connection
return connection
@classmethod
def create_connection(cls, mongo_url, timeout=None, retry_attempts=None):
parsed = urlparse(mongo_url)
# Force validation of scheme
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
raise pymongo.errors.InvalidURI((
"Invalid URI scheme:"
" URI must begin with 'mongodb://' or 'mongodb+srv://'"
))
if timeout is None:
timeout = int(os.environ.get("AVALON_TIMEOUT") or 1000)
kwargs = {
"serverSelectionTimeoutMS": timeout
}
if should_add_certificate_path_to_mongo_url(mongo_url):
kwargs["ssl_ca_certs"] = certifi.where()
mongo_client = pymongo.MongoClient(mongo_url, **kwargs)
if retry_attempts is None:
retry_attempts = 3
elif not retry_attempts:
retry_attempts = 1
last_exc = None
valid = False
t1 = time.time()
for attempt in range(1, retry_attempts + 1):
try:
mongo_client.server_info()
with mongo_client.start_session():
pass
valid = True
break
except Exception as exc:
last_exc = exc
if attempt < retry_attempts:
cls.log.warning(
"Attempt {} failed. Retrying... ".format(attempt)
)
time.sleep(1)
if not valid:
raise last_exc
cls.log.info("Connected to {}, delay {:.3f}s".format(
mongo_url, time.time() - t1
))
return mongo_client
def get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def get_project_connection(project_name):
"""Direct access to mongo collection.
We're trying to avoid direct access to mongo. This should be used
only for Create, Update and Remove operations until api calls for
those are implemented.
Args:
project_name(str): Project name for which collection should be
returned.
Returns:
pymongo.Collection: Collection related to the passed project.
"""
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return get_project_database()[project_name]

39
openpype/client/notes.md Normal file
View file

@ -0,0 +1,39 @@
# Client functionality
## Reason
Preparation for the OpenPype v4 server. The goal is to remove direct mongo calls from the code, to prepare at least a little for a different source of data, and to start thinking about database calls less as mongo calls and more universally. To do so a simple wrapper around database calls was implemented so pymongo specific code is not used directly.
The current goal is not to make a universal database model which can easily be replaced with any other source of data, but to get as close as possible. The current implementation of OpenPype is too tightly coupled to pymongo and its abilities, so we're trying to get closer with long term changes that can be used even in the current state.
## Queries
Query functions don't use the full potential of mongo queries, like very specific queries based on subdictionaries or unknown structures. We try to avoid these calls as much as possible because they probably won't be available in the future. If one is really necessary a new function can be added, but only if it's reasonable for the overall logic. All query functions were moved to `~/client/entities.py`. Each function has arguments with available filters and a possible reduction of returned keys for each entity. A minimal usage sketch follows.
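A minimal sketch, assuming the query helpers imported elsewhere in this commit (the project and asset names are hypothetical):

```python
from openpype.client import get_project, get_asset_by_name

project_name = "demo_project"
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, "sh010")
```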
## Changes
Changes are a little bit complicated. Mongo has many options for how an update can happen, which had to be reduced; it would also be complicated at this stage to validate values which are created or updated, thus there is almost no automation at this point. Changes can be made using operations available in `~/client/operations.py`. Each operation requires a project name and entity type, and may require operation specific data.
### Create
Create operations expect already prepared document data; for that, functions creating skeletal structures of documents are prepared (they do not fill all required data). Except for `_id`, all data should be right. Existence of the entity is not validated, so if the same creation operation is sent n times it will create the entity n times, which can cause issues.
### Update
Update operations require an entity id and the keys that should be changed; the update dictionary must have the form {"key": value}. If a value should be set in a nested dictionary, the key must contain all subkeys joined with a dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify creating update dictionaries, helper functions were prepared which do that for you; their names follow the template `prepare_<entity type>_update_data` - they work by comparing the previous document and the new document. If a function is missing for a requested entity type it is because we didn't need it yet and it requires implementation.
### Delete
Delete operations need the entity id. The entity will be deleted from mongo. A short sketch of how the operations compose is shown below.
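A sketch using the API from `~/client/operations.py` in this commit (project name, ids and values are hypothetical, and `commit()` needs a reachable mongo, so treat this as illustration only):

```python
from bson.objectid import ObjectId
from openpype.client.operations import OperationsSession, new_asset_document

project_id = ObjectId()  # hypothetical ids for the sketch
parent_id = None

session = OperationsSession()

# Create: the skeletal document is prepared by the caller.
asset_doc = new_asset_document(
    "sh010", project_id, parent_id, [], data={"fps": 25}
)
session.create_entity("demo_project", "asset", asset_doc)

# Update: keys use the dot notation described above.
session.update_entity(
    "demo_project", "asset", asset_doc["_id"], {"data.fps": 24}
)

# Delete: the entity id is enough.
session.delete_entity("demo_project", "asset", asset_doc["_id"])

session.commit()
```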
## What (probably) won't be replaced
Some parts of the code are still using direct mongo calls. In most cases these are very specific calls that are module specific, or their usage will completely change in the future.
- Mongo calls that are not project specific (out of the `avalon` collection) will be removed or will have to use a different mechanism for how the data are stored. At this moment this is related to OpenPype settings and logs, ftrack server events and some other data.
- Sync server queries. They're complex and very specific to the sync server module. Their replacement will require specific calls to the OpenPype server in v4, thus their abstraction with a wrapper is irrelevant and would complicate production in v3.
- Project managers (ftrack, kitsu, shotgrid, embedded Project Manager, etc.). Project managers are creating, updating or removing assets in v3, but in v4 they will create folders with a different structure. Wrapping the creation of assets would not help to prepare for v4 because of the new data structures. The same can be said about the editorial Extract Hierarchy Avalon plugin which creates the project structure.
- Code parts that are marked as deprecated in v3 or will be deprecated in v4:
  - integrate asset legacy publish plugin - already legacy, kept for safety
  - integrate thumbnail - thumbnails will be stored in a different way in v4
  - input links - links will be stored in a different way and will have a different mechanism of linking. In v3 links are limited to the same entity type: "asset <-> asset" or "representation <-> representation".
## Known missing replacements
- change subset group in loader tool
- integrate subset group
- query input links in openpype lib
- create project in openpype lib
- save/create workfile doc in openpype lib
- integrate hero version

View file

@ -0,0 +1,640 @@
import re
import uuid
import copy
import collections
from abc import ABCMeta, abstractmethod, abstractproperty
import six
from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne, UpdateOne
from .mongo import get_project_connection
REMOVED_VALUE = object()
PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_"
PROJECT_NAME_REGEX = re.compile(
"^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS)
)
CURRENT_PROJECT_SCHEMA = "openpype:project-3.0"
CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
def _create_or_convert_to_mongo_id(mongo_id):
if mongo_id is None:
return ObjectId()
return ObjectId(mongo_id)
def new_project_document(
project_name, project_code, config, data=None, entity_id=None
):
"""Create skeleton data of project document.
Args:
project_name (str): Name of project. Used as identifier of a project.
project_code (str): Shorter version of project name without spaces and
special characters (in most cases). Should also be considered
a unique name across projects.
config (Dict[str, Any]): Project config consists of roots, templates,
applications and other project Anatomy related data.
data (Dict[str, Any]): Project data with information about its
attributes (e.g. 'fps' etc.) or integration specific keys.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of project document.
"""
if data is None:
data = {}
data["code"] = project_code
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"name": project_name,
"type": CURRENT_PROJECT_SCHEMA,
"entity_data": data,
"config": config
}
def new_asset_document(
name, project_id, parent_id, parents, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
name (str): Is considered as unique identifier of asset in project.
project_id (Union[str, ObjectId]): Id of project document.
parent_id (Union[str, ObjectId]): Id of parent asset.
parents (List[str]): List of parent assets names.
data (Dict[str, Any]): Asset document data. Empty dictionary is used
if not passed. Value of 'parent_id' is used to fill 'visualParent'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of asset document.
"""
if data is None:
data = {}
if parent_id is not None:
parent_id = ObjectId(parent_id)
data["visualParent"] = parent_id
data["parents"] = parents
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "asset",
"name": name,
"parent": ObjectId(project_id),
"data": data,
"schema": CURRENT_ASSET_DOC_SCHEMA
}
def new_subset_document(name, family, asset_id, data=None, entity_id=None):
"""Create skeleton data of subset document.
Args:
name (str): Is considered as unique identifier of subset under asset.
family (str): Subset's family.
asset_id (Union[str, ObjectId]): Id of parent asset.
data (Dict[str, Any]): Subset document data. Empty dictionary is used
if not passed. Value of 'family' is used to fill 'family'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of subset document.
"""
if data is None:
data = {}
data["family"] = family
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_SUBSET_SCHEMA,
"type": "subset",
"name": name,
"data": data,
"parent": asset_id
}
def new_version_doc(version, subset_id, data=None, entity_id=None):
"""Create skeleton data of version document.
Args:
version (int): Is considered as unique identifier of version
under subset.
subset_id (Union[str, ObjectId]): Id of parent subset.
data (Dict[str, Any]): Version document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of version document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_VERSION_SCHEMA,
"type": "version",
"name": int(version),
"parent": subset_id,
"data": data
}
def new_representation_doc(
name, version_id, context, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
version (int): Is considered as unique identifier of version
under subset.
version_id (Union[str, ObjectId]): Id of parent version.
context (Dict[str, Any]): Representation context used for fill template
of to query.
data (Dict[str, Any]): Representation document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of version document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_REPRESENTATION_SCHEMA,
"type": "representation",
"parent": version_id,
"name": name,
"data": data,
# Imprint shortcut to context for performance reasons.
"context": context
}
def new_workfile_info_doc(
filename, asset_id, task_name, files, data=None, entity_id=None
):
"""Create skeleton data of workfile info document.
Workfile document is at this moment used primarily for artist notes.
Args:
filename (str): Filename of workfile.
asset_id (Union[str, ObjectId]): Id of asset under which workfile lives.
task_name (str): Task under which the workfile was created.
files (List[str]): List of rootless filepaths related to workfile.
data (Dict[str, Any]): Additional metadata.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of workfile info document.
"""
if not data:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "workfile",
"parent": ObjectId(asset_id),
"task_name": task_name,
"filename": filename,
"data": data,
"files": files
}
def _prepare_update_data(old_doc, new_doc, replace):
changes = {}
for key, value in new_doc.items():
if key not in old_doc or value != old_doc[key]:
changes[key] = value
if replace:
for key in old_doc.keys():
if key not in new_doc:
changes[key] = REMOVED_VALUE
return changes
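# Illustrative behaviour (hypothetical documents):
#
#   _prepare_update_data({"a": 1, "b": 2}, {"a": 1, "b": 3}, True)
#   # -> {"b": 3}
#   _prepare_update_data({"a": 1, "b": 2}, {"a": 1}, True)
#   # -> {"b": REMOVED_VALUE}   (key is unset on update)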
def prepare_subset_update_data(old_doc, new_doc, replace=True):
"""Compare two subset documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_version_update_data(old_doc, new_doc, replace=True):
"""Compare two version documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_representation_update_data(old_doc, new_doc, replace=True):
"""Compare two representation documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
"""Compare two workfile info documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
@six.add_metaclass(ABCMeta)
class AbstractOperation(object):
"""Base operation class.
An operation represents a call into the database. The call can create,
change or remove data.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
"""
def __init__(self, project_name, entity_type):
self._project_name = project_name
self._entity_type = entity_type
self._id = str(uuid.uuid4())
@property
def project_name(self):
return self._project_name
@property
def id(self):
"""Identifier of operation."""
return self._id
@property
def entity_type(self):
return self._entity_type
@abstractproperty
def operation_name(self):
"""Stringified type of operation."""
pass
@abstractmethod
def to_mongo_operation(self):
"""Convert operation to Mongo batch operation."""
pass
def to_data(self):
"""Convert opration to data that can be converted to json or others.
Warning:
Current state returns ObjectId objects which cannot be parsed by
json.
Returns:
Dict[str, Any]: Description of operation.
"""
return {
"id": self._id,
"entity_type": self.entity_type,
"project_name": self.project_name,
"operation": self.operation_name
}
class CreateOperation(AbstractOperation):
"""Opeartion to create an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
data (Dict[str, Any]): Data of entity that will be created.
"""
operation_name = "create"
def __init__(self, project_name, entity_type, data):
super(CreateOperation, self).__init__(project_name, entity_type)
if not data:
data = {}
else:
data = copy.deepcopy(dict(data))
if "_id" not in data:
data["_id"] = ObjectId()
else:
data["_id"] = ObjectId(data["_id"])
self._entity_id = data["_id"]
self._data = data
def __setitem__(self, key, value):
self.set_value(key, value)
def __getitem__(self, key):
return self.data[key]
def set_value(self, key, value):
self.data[key] = value
def get(self, key, *args, **kwargs):
return self.data.get(key, *args, **kwargs)
@property
def entity_id(self):
return self._entity_id
@property
def data(self):
return self._data
def to_mongo_operation(self):
return InsertOne(copy.deepcopy(self._data))
def to_data(self):
output = super(CreateOperation, self).to_data()
output["data"] = copy.deepcopy(self.data)
return output
class UpdateOperation(AbstractOperation):
"""Opeartion to update an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Identifier of an entity.
update_data (Dict[str, Any]): Key -> value changes that will be set in
database. If value is set to 'REMOVED_VALUE' the key will be
removed. Only first level of dictionary is checked (on purpose).
"""
operation_name = "update"
def __init__(self, project_name, entity_type, entity_id, update_data):
super(UpdateOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
self._update_data = update_data
@property
def entity_id(self):
return self._entity_id
@property
def update_data(self):
return self._update_data
def to_mongo_operation(self):
unset_data = {}
set_data = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
unset_data[key] = None
else:
set_data[key] = value
op_data = {}
if unset_data:
op_data["$unset"] = unset_data
if set_data:
op_data["$set"] = set_data
if not op_data:
return None
return UpdateOne(
{"_id": self.entity_id},
op_data
)
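# Illustrative conversion (hypothetical id and values):
#
#   UpdateOperation(
#       "demo_project", "asset", entity_id,
#       {"data.fps": 25, "data.tools": REMOVED_VALUE}
#   ).to_mongo_operation()
#   # -> UpdateOne(
#   #        {"_id": entity_id},
#   #        {"$set": {"data.fps": 25}, "$unset": {"data.tools": None}}
#   #    )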
def to_data(self):
changes = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
value = None
changes[key] = value
output = super(UpdateOperation, self).to_data()
output.update({
"entity_id": self.entity_id,
"changes": changes
})
return output
class DeleteOperation(AbstractOperation):
"""Opeartion to delete an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Entity id that will be removed.
"""
operation_name = "delete"
def __init__(self, project_name, entity_type, entity_id):
super(DeleteOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
@property
def entity_id(self):
return self._entity_id
def to_mongo_operation(self):
return DeleteOne({"_id": self.entity_id})
def to_data(self):
output = super(DeleteOperation, self).to_data()
output["entity_id"] = self.entity_id
return output
class OperationsSession(object):
"""Session storing operations that should happen in an order.
At this moment the session does not handle anything special and can be
considered a stupid list of operations that will happen one after another.
If the same entity is created multiple times it is not handled in any way
and document values are not validated.
Operations may target multiple projects; on commit they are grouped and
processed by project name.
"""
def __init__(self):
self._operations = []
def add(self, operation):
"""Add operation to be processed.
Args:
operation (BaseOperation): Operation that should be processed.
"""
if not isinstance(
operation,
(CreateOperation, UpdateOperation, DeleteOperation)
):
raise TypeError("Expected Operation object got {}".format(
str(type(operation))
))
self._operations.append(operation)
def append(self, operation):
"""Add operation to be processed.
Args:
operation (BaseOperation): Operation that should be processed.
"""
self.add(operation)
def extend(self, operations):
"""Add operations to be processed.
Args:
operations (List[BaseOperation]): Operations that should be
processed.
"""
for operation in operations:
self.add(operation)
def remove(self, operation):
"""Remove operation."""
self._operations.remove(operation)
def clear(self):
"""Clear all registered operations."""
self._operations = []
def to_data(self):
return [
operation.to_data()
for operation in self._operations
]
def commit(self):
"""Commit session operations."""
operations, self._operations = self._operations, []
if not operations:
return
operations_by_project = collections.defaultdict(list)
for operation in operations:
operations_by_project[operation.project_name].append(operation)
for project_name, operations in operations_by_project.items():
bulk_writes = []
for operation in operations:
mongo_op = operation.to_mongo_operation()
if mongo_op is not None:
bulk_writes.append(mongo_op)
if bulk_writes:
collection = get_project_connection(project_name)
collection.bulk_write(bulk_writes)
def create_entity(self, project_name, entity_type, data):
"""Fast access to 'CreateOperation'.
Returns:
CreateOperation: Object of create operation.
"""
operation = CreateOperation(project_name, entity_type, data)
self.add(operation)
return operation
def update_entity(self, project_name, entity_type, entity_id, update_data):
"""Fast access to 'UpdateOperation'.
Returns:
UpdateOperation: Object of update operation.
"""
operation = UpdateOperation(
project_name, entity_type, entity_id, update_data
)
self.add(operation)
return operation
def delete_entity(self, project_name, entity_type, entity_id):
"""Fast access to 'DeleteOperation'.
Returns:
DeleteOperation: Object of delete operation.
"""
operation = DeleteOperation(project_name, entity_type, entity_id)
self.add(operation)
return operation

View file

@ -1,11 +1,11 @@
import os
import shutil
from openpype.lib import (
PreLaunchHook,
get_custom_workfile_template_by_context,
from openpype.lib import PreLaunchHook
from openpype.settings import get_project_settings
from openpype.pipeline.workfile import (
get_custom_workfile_template,
get_custom_workfile_template_by_string_context
)
from openpype.settings import get_project_settings
class CopyTemplateWorkfile(PreLaunchHook):
@ -54,41 +54,22 @@ class CopyTemplateWorkfile(PreLaunchHook):
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
host_name = self.application.host_name
project_settings = get_project_settings(project_name)
host_settings = project_settings[self.application.host_name]
workfile_builder_settings = host_settings.get("workfile_builder")
if not workfile_builder_settings:
# TODO remove warning when deprecated
self.log.warning((
"Seems like old version of settings is used."
" Can't access custom templates in host \"{}\"."
).format(self.application.full_label))
return
if not workfile_builder_settings["create_first_version"]:
self.log.info((
"Project \"{}\" has turned off to create first workfile for"
" application \"{}\""
).format(project_name, self.application.full_label))
return
# Backwards compatibility
template_profiles = workfile_builder_settings.get("custom_templates")
if not template_profiles:
self.log.info(
"Custom templates are not filled. Skipping template copy."
)
return
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
if project_doc and asset_doc:
self.log.debug("Started filtering of custom template paths.")
template_path = get_custom_workfile_template_by_context(
template_profiles, project_doc, asset_doc, task_name, anatomy
template_path = get_custom_workfile_template(
project_doc,
asset_doc,
task_name,
host_name,
anatomy,
project_settings
)
else:
@ -96,10 +77,13 @@ class CopyTemplateWorkfile(PreLaunchHook):
"Global data collection probably did not execute."
" Using backup solution."
))
dbcon = self.data.get("dbcon")
template_path = get_custom_workfile_template_by_string_context(
template_profiles, project_name, asset_name, task_name,
dbcon, anatomy
project_name,
asset_name,
task_name,
host_name,
anatomy,
project_settings
)
if not template_path:

View file

@ -1,3 +1,4 @@
from openpype.client import get_project, get_asset_by_name
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
@ -69,7 +70,7 @@ class GlobalHostDataHook(PreLaunchHook):
self.data["dbcon"] = dbcon
# Project document
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
self.data["project_doc"] = project_doc
asset_name = self.data.get("asset_name")
@ -79,8 +80,5 @@ class GlobalHostDataHook(PreLaunchHook):
)
return
asset_doc = dbcon.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
self.data["asset_doc"] = asset_doc

View file

@ -19,8 +19,15 @@ class MissingMethodsError(ValueError):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
if isinstance(host, HostBase):
host_name = host.name
else:
try:
host_name = host.__file__.replace("\\", "/").split("/")[-3]
except Exception:
host_name = str(host)
message = (
"Host \"{}\" miss methods {}".format(host.name, joined_missing)
"Host \"{}\" miss methods {}".format(host_name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)

View file

@ -1,9 +1,6 @@
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True",
"WEBSOCKET_URL": "ws://localhost:8097/ws/"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
from .addon import AfterEffectsAddon
__all__ = (
"AfterEffectsAddon",
)

View file

@ -0,0 +1,23 @@
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
class AfterEffectsAddon(OpenPypeModule, IHostAddon):
name = "aftereffects"
host_name = "aftereffects"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True",
"WEBSOCKET_URL": "ws://localhost:8097/ws/"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_workfile_extensions(self):
return [".aep"]

View file

@ -1,13 +1,16 @@
import os
import sys
import re
import json
import contextlib
import traceback
import logging
from functools import partial
from Qt import QtWidgets
from openpype.pipeline import install_host
from openpype.lib.remote_publish import headless_publish
from openpype.modules import ModulesManager
from openpype.tools.utils import host_tools
from .launch_logic import ProcessLauncher, get_stub
@ -35,10 +38,18 @@ def main(*subprocess_args):
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
launcher.execute_in_main_thread(lambda: headless_publish(
log,
"CloseAE",
os.environ.get("IS_TEST")))
manager = ModulesManager()
webpublisher_addon = manager["webpublisher"]
launcher.execute_in_main_thread(
partial(
webpublisher_addon.headless_publish,
log,
"CloseAE",
os.environ.get("IS_TEST")
)
)
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
@ -68,3 +79,57 @@ def get_extension_manifest_path():
"CSXS",
"manifest.xml"
)
def get_unique_layer_name(layers, name):
"""
Gets all layer names and if 'name' is present in them, increases
the suffix by 1 (e.g. creates a unique layer name - for Loader)
Args:
layers (list): of strings, names only
name (string): checked value
Returns:
(string): name_00X (without version)
"""
names = {}
for layer in layers:
layer_name = re.sub(r'_\d{3}$', '', layer)
if layer_name in names.keys():
names[layer_name] = names[layer_name] + 1
else:
names[layer_name] = 1
occurrences = names.get(name, 0)
return "{}_{:0>3d}".format(name, occurrences + 1)
def get_background_layers(file_url):
"""
Pulls file names from the background json file and enriches them with
the folder url so AE is able to import the files.
Order is important; it follows the order in the json.
Args:
file_url (str): abs url of background json
Returns:
(list): of abs paths to images
"""
with open(file_url) as json_file:
data = json.load(json_file)
layers = list()
bg_folder = os.path.dirname(file_url)
for child in data['children']:
if child.get("filename"):
layers.append(os.path.join(bg_folder, child.get("filename")).
replace("\\", "/"))
else:
for layer in child['children']:
if layer.get("filename"):
layers.append(os.path.join(bg_folder,
layer.get("filename")).
replace("\\", "/"))
return layers
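# Example background json (hypothetical content) which would yield two
# layers, '<dir>/bg.png' and '<dir>/fg.png':
#
#   {
#       "children": [
#           {"filename": "bg.png"},
#           {"children": [{"filename": "fg.png"}]}
#       ]
#   }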

View file

@ -1,5 +1,4 @@
import os
import sys
from Qt import QtWidgets
@ -15,6 +14,7 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
legacy_io,
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects
from openpype.lib import register_event_callback
@ -136,7 +136,7 @@ def ls():
def check_inventory():
"""Checks loaded containers if they are of highest version"""
if not lib.any_outdated():
if not any_outdated_containers():
return
# Warn about outdated containers.

View file

@ -1,12 +1,11 @@
"""Host API required Work Files tool"""
import os
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .launch_logic import get_stub
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["aftereffects"]
return [".aep"]
def has_unsaved_changes():

View file

@ -11,6 +11,8 @@ class AEWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_instance_attr_defs(self):
return []
@ -35,7 +37,6 @@ class AEWorkfileCreator(AutoCreator):
existing_instance = instance
break
variant = ''
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
@ -44,15 +45,17 @@ class AEWorkfileCreator(AutoCreator):
if existing_instance is None:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
"variant": self.default_variant
}
data.update(self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
))
new_instance = CreatedInstance(
@ -69,7 +72,9 @@ class AEWorkfileCreator(AutoCreator):
):
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name

View file

@ -1,14 +1,14 @@
import re
from openpype.lib import (
get_background_layers,
get_unique_layer_name
)
from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
from openpype.hosts.aftereffects.api.lib import (
get_background_layers,
get_unique_layer_name,
)
class BackgroundLoader(AfterEffectsLoader):

View file

@ -1,12 +1,11 @@
import re
from openpype import lib
from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
from openpype.hosts.aftereffects.api.lib import get_unique_layer_name
class FileLoader(AfterEffectsLoader):
@ -28,7 +27,7 @@ class FileLoader(AfterEffectsLoader):
stub = self.get_stub()
layers = stub.get_items(comps=True, folders=True, footages=True)
existing_layers = [layer.name for layer in layers]
comp_name = lib.get_unique_layer_name(
comp_name = get_unique_layer_name(
existing_layers, "{}_{}".format(context["asset"]["name"], name))
import_options = {}
@ -87,7 +86,7 @@ class FileLoader(AfterEffectsLoader):
if namespace_from_container != layer_name:
layers = stub.get_items(comps=True)
existing_layers = [layer.name for layer in layers]
layer_name = lib.get_unique_layer_name(
layer_name = get_unique_layer_name(
existing_layers,
"{}_{}".format(context["asset"], context["subset"]))
else: # switching version - keep same name

View file

@ -102,7 +102,6 @@ class CollectAERender(publish.AbstractCollectRender):
attachTo=False,
setMembers='',
publish=True,
renderer='aerender',
name=subset_name,
resolutionWidth=render_q.width,
resolutionHeight=render_q.height,
@ -113,7 +112,6 @@ class CollectAERender(publish.AbstractCollectRender):
frameStart=frame_start,
frameEnd=frame_end,
frameStep=1,
toBeRenderedOn='deadline',
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes", {}),
@ -138,6 +136,9 @@ class CollectAERender(publish.AbstractCollectRender):
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
instance.toBeRenderedOn = "deadline"
instance.renderer = "aerender"
instance.farm = True # to skip integrate
instances.append(instance)
instances_to_remove.append(inst)

View file

@ -1,8 +1,8 @@
import os
import pyblish.api
from openpype.lib import get_subset_name_with_asset_doc
from openpype.pipeline import legacy_io
from openpype.pipeline.create import get_subset_name
class CollectWorkfile(pyblish.api.ContextPlugin):
@ -11,6 +11,8 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
label = "Collect After Effects Workfile Instance"
order = pyblish.api.CollectorOrder + 0.1
default_variant = "Main"
def process(self, context):
existing_instance = None
for instance in context:
@ -69,13 +71,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
# workfile instance
family = "workfile"
subset = get_subset_name_with_asset_doc(
subset = get_subset_name(
family,
"",
self.default_variant,
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"]
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
# Create instance
instance = context.create_instance(subset)

View file

@ -1,52 +1,6 @@
import os
from .addon import BlenderAddon
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
# Prepare path to implementation script
implementation_user_script_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"blender_addon"
)
# Add blender implementation script path to PYTHONPATH
python_path = env.get("PYTHONPATH") or ""
python_path_parts = [
path
for path in python_path.split(os.pathsep)
if path
]
python_path_parts.insert(0, implementation_user_script_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Modify Blender user scripts path
previous_user_scripts = set()
# Implementation path is added to set for easier paths check inside loops
# - will be removed at the end
previous_user_scripts.add(implementation_user_script_path)
openpype_blender_user_scripts = (
env.get("OPENPYPE_BLENDER_USER_SCRIPTS") or ""
)
for path in openpype_blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
blender_user_scripts = env.get("BLENDER_USER_SCRIPTS") or ""
for path in blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
# Remove implementation path from user script paths as is set to
# `BLENDER_USER_SCRIPTS`
previous_user_scripts.remove(implementation_user_script_path)
env["BLENDER_USER_SCRIPTS"] = implementation_user_script_path
# Set custom user scripts env
env["OPENPYPE_BLENDER_USER_SCRIPTS"] = os.pathsep.join(
previous_user_scripts
)
# Define Qt binding if not defined
if not env.get("QT_PREFERRED_BINDING"):
env["QT_PREFERRED_BINDING"] = "PySide2"
__all__ = (
"BlenderAddon",
)

View file

@ -0,0 +1,73 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
BLENDER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class BlenderAddon(OpenPypeModule, IHostAddon):
name = "blender"
host_name = "blender"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
# Prepare path to implementation script
implementation_user_script_path = os.path.join(
BLENDER_ROOT_DIR,
"blender_addon"
)
# Add blender implementation script path to PYTHONPATH
python_path = env.get("PYTHONPATH") or ""
python_path_parts = [
path
for path in python_path.split(os.pathsep)
if path
]
python_path_parts.insert(0, implementation_user_script_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Modify Blender user scripts path
previous_user_scripts = set()
# Implementation path is added to set for easier paths check inside
# loops - will be removed at the end
previous_user_scripts.add(implementation_user_script_path)
openpype_blender_user_scripts = (
env.get("OPENPYPE_BLENDER_USER_SCRIPTS") or ""
)
for path in openpype_blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
blender_user_scripts = env.get("BLENDER_USER_SCRIPTS") or ""
for path in blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
# Remove implementation path from user script paths as is set to
# `BLENDER_USER_SCRIPTS`
previous_user_scripts.remove(implementation_user_script_path)
env["BLENDER_USER_SCRIPTS"] = implementation_user_script_path
# Set custom user scripts env
env["OPENPYPE_BLENDER_USER_SCRIPTS"] = os.pathsep.join(
previous_user_scripts
)
# Define Qt binding if not defined
if not env.get("QT_PREFERRED_BINDING"):
env["QT_PREFERRED_BINDING"] = "PySide2"
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(BLENDER_ROOT_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".blend"]

View file

@ -234,7 +234,7 @@ def lsattrs(attrs: Dict) -> List:
def read(node: bpy.types.bpy_struct_meta_idprop):
"""Return user-defined attributes from `node`"""
data = dict(node.get(pipeline.AVALON_PROPERTY))
data = dict(node.get(pipeline.AVALON_PROPERTY, {}))
# Ignore hidden/internal data
data = {

View file

@ -26,7 +26,7 @@ PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and doesn't slow
# down Blender. At least on macOS the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01
TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1
class BlenderApplication(QtWidgets.QApplication):
@ -164,6 +164,12 @@ def _process_app_events() -> Optional[float]:
dialog.setDetailedText(detail)
dialog.exec_()
# Refresh Manager
if GlobalClass.app:
manager = GlobalClass.app.get_window("WM_OT_avalon_manager")
if manager:
manager.refresh()
if not GlobalClass.is_windows:
if OpenFileCacher.opening_file:
return TIMER_INTERVAL
@ -192,10 +198,11 @@ class LaunchQtApp(bpy.types.Operator):
self._app = BlenderApplication.get_app()
GlobalClass.app = self._app
bpy.app.timers.register(
_process_app_events,
persistent=True
)
if not bpy.app.timers.is_registered(_process_app_events):
bpy.app.timers.register(
_process_app_events,
persistent=True
)
def execute(self, context):
"""Execute the operator.
@ -220,12 +227,9 @@ class LaunchQtApp(bpy.types.Operator):
self._app.store_window(self.bl_idname, window)
self._window = window
if not isinstance(
self._window,
(QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
):
if not isinstance(self._window, (QtWidgets.QWidget, ModuleType)):
raise AttributeError(
"`window` should be a `QDialog or module`. Got: {}".format(
"`window` should be a `QWidget or module`. Got: {}".format(
str(type(window))
)
)
@ -249,9 +253,9 @@ class LaunchQtApp(bpy.types.Operator):
self._window.setWindowFlags(on_top_flags)
self._window.show()
if on_top_flags != origin_flags:
self._window.setWindowFlags(origin_flags)
self._window.show()
# if on_top_flags != origin_flags:
# self._window.setWindowFlags(origin_flags)
# self._window.show()
return {'FINISHED'}

View file

@ -5,8 +5,6 @@ from typing import List, Optional
import bpy
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
class OpenFileCacher:
"""Store information about opening file.
@ -78,7 +76,7 @@ def has_unsaved_changes() -> bool:
def file_extensions() -> List[str]:
"""Return the supported file extensions for Blender scene files."""
return HOST_WORKFILE_EXTENSIONS["blender"]
return [".blend"]
def work_root(session: dict) -> str:

View file

@ -1,4 +1,10 @@
from openpype.pipeline import install_host
from openpype.hosts.blender import api
install_host(api)
def register():
install_host(api)
def unregister():
pass

View file

@ -6,12 +6,12 @@ from typing import Dict, List, Optional
import bpy
from openpype import lib
from openpype.pipeline import (
legacy_create,
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.create import get_legacy_creator_by_name
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
@ -157,7 +157,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
t.id = local_obj
elif local_obj.type == 'EMPTY':
creator_plugin = lib.get_creator_by_name("CreateAnimation")
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateAnimation\" was "
"not found.")

View file

@ -118,7 +118,7 @@ class JsonLayoutLoader(plugin.AssetLoader):
# Camera creation when loading a layout is not necessary for now,
# but the code is worth keeping in case we need it in the future.
# # Create the camera asset and the camera instance
# creator_plugin = lib.get_creator_by_name("CreateCamera")
# creator_plugin = get_legacy_creator_by_name("CreateCamera")
# if not creator_plugin:
# raise ValueError("Creator plugin \"CreateCamera\" was "
# "not found.")

View file

@ -6,12 +6,12 @@ from typing import Dict, List, Optional
import bpy
from openpype import lib
from openpype.pipeline import (
legacy_create,
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.create import get_legacy_creator_by_name
from openpype.hosts.blender.api import (
plugin,
get_selection,
@ -244,7 +244,7 @@ class BlendRigLoader(plugin.AssetLoader):
objects = self._process(libpath, asset_group, group_name, action)
if create_animation:
creator_plugin = lib.get_creator_by_name("CreateAnimation")
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateAnimation\" was "
"not found.")

View file

@ -180,7 +180,7 @@ class ExtractLayout(openpype.api.Extractor):
"rotation": {
"x": asset.rotation_euler.x,
"y": asset.rotation_euler.y,
"z": asset.rotation_euler.z,
"z": asset.rotation_euler.z
},
"scale": {
"x": asset.scale.x,
@ -189,6 +189,18 @@ class ExtractLayout(openpype.api.Extractor):
}
}
json_element["transform_matrix"] = []
for row in list(asset.matrix_world.transposed()):
json_element["transform_matrix"].append(list(row))
json_element["basis"] = [
[1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
]
# Extract the animation as well
if family == "rig":
f, n = self._export_animation(

View file

@ -14,7 +14,7 @@ from openpype.tools.utils import host_tools
from openpype.pipeline import install_openpype_plugins
log = Logger().get_logger("Celaction_cli_publisher")
log = Logger.get_logger("Celaction_cli_publisher")
publish_host = "celaction"

View file

@ -30,7 +30,8 @@ from .lib import (
maintained_temp_file_path,
get_clip_segment,
get_batch_group_from_desktop,
MediaInfoFile
MediaInfoFile,
TimeEffectMetadata
)
from .utils import (
setup,
@ -107,6 +108,7 @@ __all__ = [
"get_clip_segment",
"get_batch_group_from_desktop",
"MediaInfoFile",
"TimeEffectMetadata",
# pipeline
"install",

View file

@ -5,10 +5,11 @@ import json
import pickle
import clique
import tempfile
import traceback
import itertools
import contextlib
import xml.etree.cElementTree as cET
from copy import deepcopy
from copy import deepcopy, copy
from xml.etree import ElementTree as ET
from pprint import pformat
from .constants import (
@ -266,7 +267,7 @@ def get_current_sequence(selection):
def rescan_hooks():
import flame
try:
flame.execute_shortcut('Rescan Python Hooks')
flame.execute_shortcut("Rescan Python Hooks")
except Exception:
pass
@ -1082,21 +1083,21 @@ class MediaInfoFile(object):
xml_data (ET.Element): clip data
"""
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
for out_track in xml_data.iter("track"):
for out_feed in out_track.iter("feed"):
# start frame
out_feed_nb_ticks_obj = out_feed.find(
'startTimecode/nbTicks')
"startTimecode/nbTicks")
self.start_frame = out_feed_nb_ticks_obj.text
# fps
out_feed_fps_obj = out_feed.find(
'startTimecode/rate')
"startTimecode/rate")
self.fps = out_feed_fps_obj.text
# drop frame mode
out_feed_drop_mode_obj = out_feed.find(
'startTimecode/dropMode')
"startTimecode/dropMode")
self.drop_mode = out_feed_drop_mode_obj.text
break
except Exception as msg:
@ -1118,8 +1119,153 @@ class MediaInfoFile(object):
tree = cET.ElementTree(xml_element_data)
tree.write(
fpath, xml_declaration=True,
method='xml', encoding='UTF-8'
method="xml", encoding="UTF-8"
)
except IOError as error:
raise IOError(
"Not able to write data to file: {}".format(error))
class TimeEffectMetadata(object):
log = log
_data = {}
_retime_modes = {
0: "speed",
1: "timewarp",
2: "duration"
}
def __init__(self, segment, logger=None):
if logger:
self.log = logger
self._data = self._get_metadata(segment)
@property
def data(self):
""" Returns timewarp effect data
Returns:
dict: retime data
"""
return self._data
def _get_metadata(self, segment):
effects = segment.effects or []
for effect in effects:
if effect.type == "Timewarp":
with maintained_temp_file_path(".timewarp_node") as tmp_path:
self.log.info("Temp File: {}".format(tmp_path))
effect.save_setup(tmp_path)
return self._get_attributes_from_xml(tmp_path)
return {}
def _get_attributes_from_xml(self, tmp_path):
with open(tmp_path, "r") as tw_setup_file:
tw_setup_string = tw_setup_file.read()
tw_setup_file.close()
tw_setup_xml = ET.fromstring(tw_setup_string)
tw_setup = self._dictify(tw_setup_xml)
# pprint(tw_setup)
try:
tw_setup_state = tw_setup["Setup"]["State"][0]
mode = int(
tw_setup_state["TW_RetimerMode"][0]["_text"]
)
r_data = {
"type": self._retime_modes[mode],
"effectStart": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["Start"]),
"effectEnd": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["End"])
}
if mode == 0: # speed
r_data[self._retime_modes[mode]] = float(
tw_setup_state["TW_Speed"]
[0]["Channel"][0]["Value"][0]["_text"]
) / 100
elif mode == 1: # timewarp
print("timing")
r_data[self._retime_modes[mode]] = self._get_anim_keys(
tw_setup_state["TW_Timing"]
)
elif mode == 2: # duration
r_data[self._retime_modes[mode]] = {
"start": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Frame"][0]["_text"]
)
},
"end": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Frame"][0]["_text"]
)
}
}
except Exception:
lines = traceback.format_exception(*sys.exc_info())
self.log.error("\n".join(lines))
return
return r_data
def _get_anim_keys(self, setup_cat, index=None):
return_data = {
"extrapolation": (
setup_cat[0]["Channel"][0]["Extrap"][0]["_text"]
),
"animKeys": []
}
for key in setup_cat[0]["Channel"][0]["KFrames"][0]["Key"]:
if index and int(key["Index"]) != index:
continue
key_data = {
"source": float(key["Value"][0]["_text"]),
"timeline": float(key["Frame"][0]["_text"]),
"index": int(key["Index"]),
"curveMode": key["CurveMode"][0]["_text"],
"curveOrder": key["CurveOrder"][0]["_text"]
}
if key.get("TangentMode"):
key_data["tangentMode"] = key["TangentMode"][0]["_text"]
return_data["animKeys"].append(key_data)
return return_data
def _dictify(self, xml_, root=True):
""" Convert xml object to dictionary
Args:
xml_ (xml.etree.ElementTree.Element): xml data
root (bool, optional): whether 'xml_' is the root element.
Defaults to True.
Returns:
dict: xml converted to a dictionary
"""
if root:
return {xml_.tag: self._dictify(xml_, False)}
d = copy(xml_.attrib)
if xml_.text:
d["_text"] = xml_.text
for x in xml_.findall("./*"):
if x.tag not in d:
d[x.tag] = []
d[x.tag].append(self._dictify(x, False))
return d
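# Illustrative conversion (hypothetical xml):
#
#   <Setup><State note="a">x</State></Setup>
#   -> {"Setup": {"State": [{"note": "a", "_text": "x"}]}}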

View file

@ -275,7 +275,7 @@ def create_otio_reference(clip_data, fps=None):
def create_otio_clip(clip_data):
from openpype.hosts.flame.api import MediaInfoFile
from openpype.hosts.flame.api import MediaInfoFile, TimeEffectMetadata
segment = clip_data["PySegment"]
@ -284,14 +284,31 @@ def create_otio_clip(clip_data):
media_timecode_start = media_info.start_frame
media_fps = media_info.fps
# Timewarp metadata
tw_data = TimeEffectMetadata(segment, logger=log).data
log.debug("__ tw_data: {}".format(tw_data))
# define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0
file_first_frame = utils.get_frame_from_filename(
clip_data["fpath"])
if file_first_frame:
file_first_frame = int(file_first_frame)
first_frame = media_timecode_start or file_first_frame or 0
_clip_source_in = int(clip_data["source_in"])
_clip_source_out = int(clip_data["source_out"])
_clip_record_in = clip_data["record_in"]
_clip_record_out = clip_data["record_out"]
_clip_record_duration = int(clip_data["record_duration"])
log.debug("_ file_first_frame: {}".format(file_first_frame))
log.debug("_ first_frame: {}".format(first_frame))
log.debug("_ _clip_source_in: {}".format(_clip_source_in))
log.debug("_ _clip_source_out: {}".format(_clip_source_out))
log.debug("_ _clip_record_in: {}".format(_clip_record_in))
log.debug("_ _clip_record_out: {}".format(_clip_record_out))
# first solve if the reverse timing
speed = 1
if clip_data["source_in"] > clip_data["source_out"]:
@ -302,16 +319,28 @@ def create_otio_clip(clip_data):
source_in = _clip_source_in - int(first_frame)
source_out = _clip_source_out - int(first_frame)
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
if file_first_frame:
log.debug("_ file_source_in: {}".format(
file_first_frame + source_in))
log.debug("_ file_source_in: {}".format(
file_first_frame + source_out))
source_duration = (source_out - source_in + 1)
# secondly check if any change of speed
if source_duration != _clip_record_duration:
retime_speed = float(source_duration) / float(_clip_record_duration)
log.debug("_ retime_speed: {}".format(retime_speed))
log.debug("_ calculated speed: {}".format(retime_speed))
speed *= retime_speed
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
# get speed from metadata if available
if tw_data.get("speed"):
speed = tw_data["speed"]
log.debug("_ metadata speed: {}".format(speed))
log.debug("_ speed: {}".format(speed))
log.debug("_ source_duration: {}".format(source_duration))
log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))

View file

@ -136,7 +136,8 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks},
"representations": []
"representations": [],
"newAssetPublishing": True
})
self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

View file

@ -1,9 +1,9 @@
import pyblish.api
import openpype.lib as oplib
from openpype.pipeline import legacy_io
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
from openpype.pipeline import legacy_io
from openpype.pipeline.create import get_subset_name
class CollecTimelineOTIO(pyblish.api.ContextPlugin):
@ -24,11 +24,14 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
sequence = opfapi.get_current_sequence(opfapi.CTX.selection)
# create subset name
subset_name = oplib.get_subset_name_with_asset_doc(
subset_name = get_subset_name(
family,
variant,
task_name,
asset_doc,
context.data["projectName"],
context.data["hostName"],
project_settings=context.data["project_settings"]
)
# adding otio timeline to context

View file

@ -8,6 +8,9 @@ import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
from openpype.pipeline.editorial import (
get_media_range_with_retimes
)
import flame
@ -47,7 +50,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if not self.keep_original_representation:
# remove previous representation if not needed
instance.data["representations"] = []
@ -67,18 +69,60 @@ class ExtractSubsetResources(openpype.api.Extractor):
# get media source first frame
source_first_frame = instance.data["sourceFirstFrame"]
self.log.debug("_ frame_start: {}".format(frame_start))
self.log.debug("_ source_first_frame: {}".format(source_first_frame))
# get timeline in/out of segment
clip_in = instance.data["clipIn"]
clip_out = instance.data["clipOut"]
# get retimed attributes
retimed_data = self._get_retimed_attributes(instance)
# get individual keys
r_handle_start = retimed_data["handle_start"]
r_handle_end = retimed_data["handle_end"]
r_source_dur = retimed_data["source_duration"]
r_speed = retimed_data["speed"]
# get handles value - take only the max from both
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
frame_start_handle = frame_start
else:
frame_start_handle = (
frame_start - handle_start) + r_handle_start
self.log.debug("_ frame_start_handle: {}".format(
frame_start_handle))
self.log.debug("_ repre_frame_start: {}".format(
repre_frame_start))
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles) + 1
# create staging dir path
staging_dir = self.staging_dir(instance)
@ -93,6 +137,28 @@ class ExtractSubsetResources(openpype.api.Extractor):
}
export_presets.update(self.export_presets_mapping)
if not instance.data.get("versionData"):
instance.data["versionData"] = {}
# set versiondata if any retime
version_data = retimed_data.get("version_data")
self.log.debug("_ version_data: {}".format(version_data))
if version_data:
instance.data["versionData"].update(version_data)
if r_speed != 1.0:
instance.data["versionData"].update({
"frameStart": frame_start_handle,
"frameEnd": (
(frame_start_handle + source_duration_handles - 1)
- (r_handle_start + r_handle_end)
)
})
self.log.debug("_ i_version_data: {}".format(
instance.data["versionData"]
))
# loop all preset names and
for unique_name, preset_config in export_presets.items():
modify_xml_data = {}
@ -115,20 +181,10 @@ class ExtractSubsetResources(openpype.api.Extractor):
)
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles)
# define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = None
name_patern_xml = "<name>_{}.".format(
unique_name)
if export_type == "Sequence Publish":
# change export clip to sequence
exporting_clip = flame.duplicate(sequence_clip)
@ -142,19 +198,25 @@ class ExtractSubsetResources(openpype.api.Extractor):
"<segment name>_<shot name>_{}.").format(
unique_name)
# change in/out marks to timeline in/out
# only for h264 with baked retime
in_mark = clip_in
out_mark = clip_out
out_mark = clip_out + 1
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles
})
else:
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))
# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles,
"startFrame": frame_start,
# enum position - lowest value starts from 0
"frameIndex": 0,
"startFrame": repre_frame_start,
"namePattern": name_patern_xml
})
@ -162,6 +224,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add any xml overrides collected form segment.comment
modify_xml_data.update(instance.data["xml_overrides"])
self.log.debug("_ in_mark: {}".format(in_mark))
self.log.debug("_ out_mark: {}".format(out_mark))
export_kwargs = {}
# validate xml preset file is filled
if preset_file == "":
@ -196,9 +261,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
"namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))
(out_mark - in_mark + 1) / 2))
self.log.debug("__ in_mark: {}".format(in_mark))
self.log.debug("__ thumb_frame_number: {}".format(
thumb_frame_number
))
@ -210,9 +274,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
"out_mark": out_mark
})
self.log.debug("__ modify_xml_data: {}".format(
pformat(modify_xml_data)
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
@ -281,9 +342,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add frame range
if preset_config["representation_add_range"]:
representation_data.update({
"frameStart": frame_start_handle,
"frameStart": repre_frame_start,
"frameEnd": (
frame_start_handle + source_duration_handles),
repre_frame_start + source_duration_handles) - 1,
"fps": instance.data["fps"]
})
@ -300,8 +361,32 @@ class ExtractSubsetResources(openpype.api.Extractor):
# at the end remove the duplicated clip
flame.delete(exporting_clip)
self.log.debug("All representations: {}".format(
pformat(instance.data["representations"])))
def _get_retimed_attributes(self, instance):
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
# get basic variables
otio_clip = instance.data["otioClip"]
# get available range trimmed with processed retimes
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
r_media_in = int(retimed_attributes["mediaIn"])
r_media_out = int(retimed_attributes["mediaOut"])
version_data = retimed_attributes.get("versionData")
return {
"version_data": version_data,
"handle_start": int(retimed_attributes["handleStart"]),
"handle_end": int(retimed_attributes["handleEnd"]),
"source_duration": (
(r_media_out - r_media_in) + 1
),
"speed": float(retimed_attributes["speed"])
}
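# Illustrative return value, assuming get_media_range_with_retimes
# reports mediaIn=100, mediaOut=199 and retimed handles of 20 frames
# for a 2x speed clip (hypothetical numbers, not from this change):
#   {
#       "version_data": {...},   # retime metadata, or None
#       "handle_start": 20,
#       "handle_end": 20,
#       "source_duration": 100,  # (199 - 100) + 1
#       "speed": 2.0,
#   }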
def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
@ -313,8 +398,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if the preset is not activated
if not activated_preset:

View file

@ -3,9 +3,9 @@ import copy
from collections import OrderedDict
from pprint import pformat
import pyblish
from openpype.lib import get_workdir
import openpype.hosts.flame.api as opfapi
import openpype.pipeline as op_pipeline
from openpype.pipeline.workfile import get_workdir
class IntegrateBatchGroup(pyblish.api.InstancePlugin):
@ -324,7 +324,13 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
project_settings = instance.context.data["project_settings"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame", anatomy
project_doc,
asset_entity,
task_data["name"],
"flame",
anatomy,
project_settings=project_settings
)

View file

@ -8,7 +8,7 @@ import contextlib
import pyblish.api
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@ -20,7 +20,7 @@ from openpype.pipeline import (
)
import openpype.hosts.fusion
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")

View file

@ -3,9 +3,7 @@ import re
import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
@ -17,13 +15,10 @@ from openpype.pipeline import (
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")
self = sys.modules[__name__]
self._project = None
def _format_version_folder(folder):
"""Format a version folder based on the filepath
@ -212,9 +207,6 @@ def switch(asset_name, filepath=None, new=True):
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = get_project(project_name)
# Go to comp
if not filepath:
current_comp = api.get_current_comp()

View file

@ -1,14 +1,12 @@
import os
import sys
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
install_host,
registered_host,
)
log = Logger().get_logger(__name__)
def main(env):
from openpype.hosts.fusion import api
@ -17,6 +15,7 @@ def main(env):
# activate fusion from pype
install_host(api)
log = Logger.get_logger(__name__)
log.info(f"Registered host: {registered_host()}")
menu.launch_openpype_menu()

View file

@ -14,7 +14,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.hosts.fusion import api
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Fusion Switch Shot")

View file

@ -1,11 +1,10 @@
import os
from .addon import (
HARMONY_HOST_DIR,
HarmonyAddon,
)
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
openharmony_path = os.path.join(
os.environ["OPENPYPE_REPOS_ROOT"], "openpype", "hosts",
"harmony", "vendor", "OpenHarmony"
)
# TODO check if is already set? What to do if is already set?
env["LIB_OPENHARMONY_PATH"] = openharmony_path
__all__ = (
"HARMONY_HOST_DIR",
"HarmonyAddon",
)

View file

@ -0,0 +1,24 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
HARMONY_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class HarmonyAddon(OpenPypeModule, IHostAddon):
name = "harmony"
host_name = "harmony"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
openharmony_path = os.path.join(
HARMONY_HOST_DIR, "vendor", "OpenHarmony"
)
# TODO check if it is already set? What to do if it is already set?
env["LIB_OPENHARMONY_PATH"] = openharmony_path
def get_workfile_extensions(self):
return [".zip"]

View file

@ -4,25 +4,24 @@ import logging
import pyblish.api
from openpype import lib
from openpype.client import get_representation_by_id
from openpype.lib import register_event_callback
from openpype.pipeline import (
legacy_io,
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
import openpype.hosts.harmony
from openpype.pipeline.load import get_outdated_containers
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.hosts.harmony import HARMONY_HOST_DIR
import openpype.hosts.harmony.api as harmony
log = logging.getLogger("openpype.hosts.harmony")
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.harmony.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PLUGINS_DIR = os.path.join(HARMONY_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
@ -50,7 +49,9 @@ def get_asset_settings():
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset()
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
@ -105,16 +106,7 @@ def check_inventory():
in Harmony.
"""
project_name = legacy_io.active_project()
outdated_containers = []
for container in ls():
representation_id = container['representation']
representation_doc = get_representation_by_id(
project_name, representation_id, fields=["parent"]
)
if representation_doc and not lib.is_latest(representation_doc):
outdated_containers.append(container)
outdated_containers = get_outdated_containers()
if not outdated_containers:
return

View file

@ -2,8 +2,6 @@
import os
import shutil
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .lib import (
ProcessContext,
get_local_harmony_path,
@ -16,7 +14,7 @@ save_disabled = False
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["harmony"]
return [".zip"]
def has_unsaved_changes():

View file

@ -5,8 +5,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
copy_files = """function copyFile(srcFilename, dstFilename)
@ -280,9 +280,7 @@ class BackgroundLoader(load.LoaderPlugin):
)
def update(self, container, representation):
path = get_representation_path(representation)
with open(path) as json_file:
data = json.load(json_file)
@ -300,10 +298,9 @@ class BackgroundLoader(load.LoaderPlugin):
bg_folder = os.path.dirname(path)
path = get_representation_path(representation)
print(container)
is_latest = is_representation_from_latest(representation)
for layer in sorted(layers):
file_to_import = [
os.path.join(bg_folder, layer).replace("\\", "/")
@ -347,7 +344,7 @@ class BackgroundLoader(load.LoaderPlugin):
}
%s
""" % (sig, sig)
if openpype.lib.is_latest(representation):
if is_latest:
harmony.send({"function": func, "args": [node, "green"]})
else:
harmony.send({"function": func, "args": [node, "red"]})

View file

@ -10,8 +10,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
class ImageSequenceLoader(load.LoaderPlugin):
@ -109,7 +109,7 @@ class ImageSequenceLoader(load.LoaderPlugin):
)
# Colour node.
if openpype.lib.is_latest(representation):
if is_representation_from_latest(representation):
harmony.send(
{
"function": "PypeHarmony.setColor",

View file

@ -10,8 +10,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
class TemplateLoader(load.LoaderPlugin):
@ -83,7 +83,7 @@ class TemplateLoader(load.LoaderPlugin):
self_name = self.__class__.__name__
update_and_replace = False
if openpype.lib.is_latest(representation):
if is_representation_from_latest(representation):
self._set_green(node)
else:
self._set_red(node)

View file

@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
"""Collect current workfile from Harmony."""
import pyblish.api
import os
import pyblish.api
from openpype.lib import get_subset_name_with_asset_doc
from openpype.pipeline.create import get_subset_name
class CollectWorkfile(pyblish.api.ContextPlugin):
@ -17,13 +17,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
"""Plugin entry point."""
family = "workfile"
basename = os.path.basename(context.data["currentFile"])
subset = get_subset_name_with_asset_doc(
subset = get_subset_name(
family,
"",
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"]
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
# Create instance

View file

@ -55,6 +55,10 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
def process(self, instance):
"""Plugin entry point."""
# TODO 'get_asset_settings' could expect asset document as argument
# which is available on 'context.data["assetEntity"]'
# - the same approach can be used in 'ValidateSceneSettingsRepair'
expected_settings = harmony.get_asset_settings()
self.log.info("scene settings from DB: {}".format(expected_settings))

View file

@ -1,41 +1,10 @@
import os
import platform
from .addon import (
HIERO_ROOT_DIR,
HieroAddon,
)
def add_implementation_envs(env, _app):
# Add requirements to HIERO_PLUGIN_PATH
pype_root = os.environ["OPENPYPE_REPOS_ROOT"]
new_hiero_paths = [
os.path.join(pype_root, "openpype", "hosts", "hiero", "api", "startup")
]
old_hiero_path = env.get("HIERO_PLUGIN_PATH") or ""
for path in old_hiero_path.split(os.pathsep):
if not path:
continue
norm_path = os.path.normpath(path)
if norm_path not in new_hiero_paths:
new_hiero_paths.append(norm_path)
env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths)
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Try to add QuickTime to PATH
quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"
if platform.system() == "windows" and os.path.exists(quick_time_path):
path_value = env.get("PATH") or ""
path_paths = [
path
for path in path_value.split(os.pathsep)
if path
]
path_paths.append(quick_time_path)
env["PATH"] = os.pathsep.join(path_paths)
# Set default values if are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
__all__ = (
"HIERO_ROOT_DIR",
"HieroAddon",
)

View file

@ -0,0 +1,63 @@
import os
import platform
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
HIERO_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class HieroAddon(OpenPypeModule, IHostAddon):
name = "hiero"
host_name = "hiero"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
# Add requirements to HIERO_PLUGIN_PATH
new_hiero_paths = [
os.path.join(HIERO_ROOT_DIR, "api", "startup")
]
old_hiero_path = env.get("HIERO_PLUGIN_PATH") or ""
for path in old_hiero_path.split(os.pathsep):
if not path:
continue
norm_path = os.path.normpath(path)
if norm_path not in new_hiero_paths:
new_hiero_paths.append(norm_path)
env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths)
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Add vendor to PYTHONPATH
python_path = env["PYTHONPATH"]
python_path_parts = []
if python_path:
python_path_parts = python_path.split(os.pathsep)
vendor_path = os.path.join(HIERO_ROOT_DIR, "vendor")
python_path_parts.insert(0, vendor_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Set default values if they are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
# Try to add QuickTime to PATH
quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"
if platform.system().lower() == "windows" and os.path.exists(quick_time_path):
path_value = env.get("PATH") or ""
path_paths = [
path
for path in path_value.split(os.pathsep)
if path
]
path_paths.append(quick_time_path)
env["PATH"] = os.pathsep.join(path_paths)
def get_workfile_extensions(self):
return [".hrox"]

View file

@ -1,7 +1,6 @@
import os
import hiero.core.events
from openpype.api import Logger
from openpype.lib import register_event_callback
from openpype.lib import Logger, register_event_callback
from .lib import (
sync_avalon_data_to_workfile,
launch_workfiles_app,
@ -11,7 +10,7 @@ from .lib import (
from .tags import add_tags_to_workfile
from .menu import update_menu_task_label
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def startupCompleted(event):

View file

@ -21,7 +21,7 @@ from openpype.client import (
)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from openpype.lib import Logger
from . import tags
try:
@ -34,7 +34,7 @@ except ImportError:
# from opentimelineio import opentime
# from pprint import pformat
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
self = sys.modules[__name__]
self._has_been_setup = False

View file

@ -6,7 +6,7 @@ import contextlib
from collections import OrderedDict
from pyblish import api as pyblish
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
schema,
register_creator_plugin_path,
@ -18,7 +18,7 @@ from openpype.pipeline import (
from openpype.tools.utils import host_tools
from . import lib, menu, events
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
# plugin paths
API_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -9,10 +9,12 @@ from Qt import QtWidgets, QtCore
import qargparse
import openpype.api as openpype
from openpype.lib import Logger
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
from . import lib
log = openpype.Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def load_stylesheet():
@ -484,7 +486,7 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
asset_doc = openpype.get_asset(asset_name)
asset_doc = get_current_project_asset(asset_name)
log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
self.data["assetData"] = asset_doc["data"]

View file

@ -2,13 +2,12 @@ import os
import hiero
from openpype.api import Logger
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
log = Logger.get_logger(__name__)
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["hiero"]
return [".hrox"]
def has_unsaved_changes():

View file

@ -109,7 +109,8 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
"clipAnnotations": annotations,
# add all additional tags
"tags": phiero.get_track_item_tags(track_item)
"tags": phiero.get_track_item_tags(track_item),
"newAssetPublishing": True
})
# otio clip data

View file

@ -0,0 +1,33 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Copyright 2007 Google Inc. All Rights Reserved.
__version__ = '3.20.1'

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/any.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42v\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_ANY._serialized_start=46
_ANY._serialized_end=84
# @@protoc_insertion_point(module_scope)
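# Usage sketch for the generated Any type (not part of the generated
# file; Pack/Unpack/Is come from protobuf's well-known-types mixin):
from google.protobuf import any_pb2
from google.protobuf import duration_pb2

any_msg = any_pb2.Any()
any_msg.Pack(duration_pb2.Duration(seconds=3))
assert any_msg.Is(duration_pb2.Duration.DESCRIPTOR)
unpacked = duration_pb2.Duration()
assert any_msg.Unpack(unpacked)
assert unpacked.seconds == 3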

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/api.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2
from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\x81\x02\n\x03\x41pi\x12\x0c\n\x04name\x18\x01 \x01(\t\x12(\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Method\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12&\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.Mixin\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x01\n\x06Method\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x18\n\x10request_type_url\x18\x02 \x01(\t\x12\x19\n\x11request_streaming\x18\x03 \x01(\x08\x12\x19\n\x11response_type_url\x18\x04 \x01(\t\x12\x1a\n\x12response_streaming\x18\x05 \x01(\x08\x12(\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.Option\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"#\n\x05Mixin\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04root\x18\x02 \x01(\tBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_API._serialized_start=113
_API._serialized_end=370
_METHOD._serialized_start=373
_METHOD._serialized_end=586
_MIXIN._serialized_start=588
_MIXIN._serialized_end=623
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/compiler/plugin.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"F\n\x07Version\x12\r\n\x05major\x18\x01 \x01(\x05\x12\r\n\x05minor\x18\x02 \x01(\x05\x12\r\n\x05patch\x18\x03 \x01(\x05\x12\x0e\n\x06suffix\x18\x04 \x01(\t\"\xba\x01\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\x12;\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.Version\"\xc1\x02\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x1a\n\x12supported_features\x18\x02 \x01(\x04\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a\x7f\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t\x12?\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42W\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb'
_VERSION._serialized_start=101
_VERSION._serialized_end=171
_CODEGENERATORREQUEST._serialized_start=174
_CODEGENERATORREQUEST._serialized_end=360
_CODEGENERATORRESPONSE._serialized_start=363
_CODEGENERATORRESPONSE._serialized_end=684
_CODEGENERATORRESPONSE_FILE._serialized_start=499
_CODEGENERATORRESPONSE_FILE._serialized_end=626
_CODEGENERATORRESPONSE_FEATURE._serialized_start=628
_CODEGENERATORRESPONSE_FEATURE._serialized_end=684
# @@protoc_insertion_point(module_scope)

File diff suppressed because it is too large

View file

@ -0,0 +1,177 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides a container for DescriptorProtos."""
__author__ = 'matthewtoia@google.com (Matt Toia)'
import warnings
class Error(Exception):
pass
class DescriptorDatabaseConflictingDefinitionError(Error):
"""Raised when a proto is added with the same name & different descriptor."""
class DescriptorDatabase(object):
"""A container accepting FileDescriptorProtos and maps DescriptorProtos."""
def __init__(self):
self._file_desc_protos_by_file = {}
self._file_desc_protos_by_symbol = {}
def Add(self, file_desc_proto):
"""Adds the FileDescriptorProto and its types to this database.
Args:
file_desc_proto: The FileDescriptorProto to add.
Raises:
DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
add a proto with the same name but different definition than an
existing proto in the database.
"""
proto_name = file_desc_proto.name
if proto_name not in self._file_desc_protos_by_file:
self._file_desc_protos_by_file[proto_name] = file_desc_proto
elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
raise DescriptorDatabaseConflictingDefinitionError(
'%s already added, but with different descriptor.' % proto_name)
else:
return
# Add all the top-level descriptors to the index.
package = file_desc_proto.package
for message in file_desc_proto.message_type:
for name in _ExtractSymbols(message, package):
self._AddSymbol(name, file_desc_proto)
for enum in file_desc_proto.enum_type:
self._AddSymbol(('.'.join((package, enum.name))), file_desc_proto)
for enum_value in enum.value:
self._file_desc_protos_by_symbol[
'.'.join((package, enum_value.name))] = file_desc_proto
for extension in file_desc_proto.extension:
self._AddSymbol(('.'.join((package, extension.name))), file_desc_proto)
for service in file_desc_proto.service:
self._AddSymbol(('.'.join((package, service.name))), file_desc_proto)
def FindFileByName(self, name):
"""Finds the file descriptor proto by file name.
Typically the file name is a relative path ending in a .proto file. The
proto with the given name will have to have been added to this database
using the Add method or else an error will be raised.
Args:
name: The file name to find.
Returns:
The file descriptor proto matching the name.
Raises:
KeyError if no file by the given name was added.
"""
return self._file_desc_protos_by_file[name]
def FindFileContainingSymbol(self, symbol):
"""Finds the file descriptor proto containing the specified symbol.
The symbol should be a fully qualified name including the file descriptor's
package and any containing messages. Some examples:
'some.package.name.Message'
'some.package.name.Message.NestedEnum'
'some.package.name.Message.some_field'
The file descriptor proto containing the specified symbol must be added to
this database using the Add method or else an error will be raised.
Args:
symbol: The fully qualified symbol name.
Returns:
The file descriptor proto containing the symbol.
Raises:
KeyError if no file contains the specified symbol.
"""
try:
return self._file_desc_protos_by_symbol[symbol]
except KeyError:
# Fields, enum values, and nested extensions are not in
# _file_desc_protos_by_symbol. Try to find the top level
# descriptor. Non-existent nested symbol under a valid top level
# descriptor can also be found. The behavior is the same with
# protobuf C++.
top_level, _, _ = symbol.rpartition('.')
try:
return self._file_desc_protos_by_symbol[top_level]
except KeyError:
# Raise the original symbol as a KeyError for better diagnostics.
raise KeyError(symbol)
def FindFileContainingExtension(self, extendee_name, extension_number):
# TODO(jieluo): implement this API.
return None
def FindAllExtensionNumbers(self, extendee_name):
# TODO(jieluo): implement this API.
return []
def _AddSymbol(self, name, file_desc_proto):
if name in self._file_desc_protos_by_symbol:
warn_msg = ('Conflict register for file "' + file_desc_proto.name +
'": ' + name +
' is already defined in file "' +
self._file_desc_protos_by_symbol[name].name + '"')
warnings.warn(warn_msg, RuntimeWarning)
self._file_desc_protos_by_symbol[name] = file_desc_proto
def _ExtractSymbols(desc_proto, package):
"""Pulls out all the symbols from a descriptor proto.
Args:
desc_proto: The proto to extract symbols from.
package: The package containing the descriptor type.
Yields:
The fully qualified name found in the descriptor.
"""
message_name = package + '.' + desc_proto.name if package else desc_proto.name
yield message_name
for nested_type in desc_proto.nested_type:
for symbol in _ExtractSymbols(nested_type, message_name):
yield symbol
for enum_type in desc_proto.enum_type:
yield '.'.join((message_name, enum_type.name))
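# Minimal usage sketch (not part of the vendored module): index a
# FileDescriptorProto and resolve it back by file name and by symbol.
from google.protobuf import descriptor_pb2

_db = DescriptorDatabase()
_file_proto = descriptor_pb2.FileDescriptorProto(
    name='example.proto', package='example')
_file_proto.message_type.add(name='Greeting')
_db.Add(_file_proto)
assert _db.FindFileByName('example.proto') is _file_proto
assert _db.FindFileContainingSymbol('example.Greeting') is _file_proto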

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/duration.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_DURATION._serialized_start=51
_DURATION._serialized_end=93
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/empty.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_EMPTY._serialized_start=48
_EMPTY._serialized_end=55
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/field_mask.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"\x1a\n\tFieldMask\x12\r\n\x05paths\x18\x01 \x03(\tB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_FIELDMASK._serialized_start=53
_FIELDMASK._serialized_end=79
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,443 @@
#! /usr/bin/env python
#
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Adds support for parameterized tests to Python's unittest TestCase class.
A parameterized test is a method in a test case that is invoked with different
argument tuples.
A simple example:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9),
(1, 1, 3))
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Each invocation is a separate test case and properly isolated just
like a normal test method, with its own setUp/tearDown cycle. In the
example above, there are three separate testcases, one of which will
fail due to an assertion error (1 + 1 != 3).
Parameters for individual test cases can be tuples (with positional parameters)
or dictionaries (with named parameters):
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
{'op1': 1, 'op2': 2, 'result': 3},
{'op1': 4, 'op2': 5, 'result': 9},
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
If a parameterized test fails, the error message will show the
original test name (which is modified internally) and the arguments
for the specific invocation, which are part of the string returned by
the shortDescription() method on test cases.
The id method of the test, used internally by the unittest framework,
is also modified to show the arguments. To make sure that test names
stay the same across several invocations, object representations like
>>> class Foo(object):
... pass
>>> repr(Foo())
'<__main__.Foo object at 0x23d8610>'
are turned into '<__main__.Foo>'. For even more descriptive names,
especially in test logs, you can use the named_parameters decorator. In
this case, only tuples are supported, and the first parameter has to
be a string (or an object that returns an apt name when converted via
str()):
class NamedExample(parameterized.TestCase):
@parameterized.named_parameters(
('Normal', 'aa', 'aaa', True),
('EmptyPrefix', '', 'abc', True),
('BothEmpty', '', '', True))
def testStartsWith(self, prefix, string, result):
self.assertEqual(result, string.startswith(prefix))
Named tests also have the benefit that they can be run individually
from the command line:
$ testmodule.py NamedExample.testStartsWithNormal
.
--------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Parameterized Classes
=====================
If invocation arguments are shared across test methods in a single
TestCase class, instead of decorating all test methods
individually, the class itself can be decorated:
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9))
class ArithmeticTest(parameterized.TestCase):
def testAdd(self, arg1, arg2, result):
self.assertEqual(arg1 + arg2, result)
def testSubtract(self, arg1, arg2, result):
self.assertEqual(result - arg1, arg2)
Inputs from Iterables
=====================
If parameters should be shared across several test cases, or are dynamically
created from other sources, a single non-tuple iterable can be passed into
the decorator. This iterable will be used to obtain the test cases:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(c.op1, c.op2, c.result) for c in testcases
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Single-Argument Test Methods
============================
If a test method takes only one argument, the single argument does not need to
be wrapped into a tuple:
class NegativeNumberExample(parameterized.TestCase):
@parameterized.parameters(
-1, -3, -4, -5
)
def testIsNegative(self, arg):
self.assertTrue(IsNegative(arg))
"""
__author__ = 'tmarek@google.com (Torsten Marek)'
import functools
import re
import types
import unittest
import uuid
try:
# Since python 3
import collections.abc as collections_abc
except ImportError:
# Won't work after python 3.8
import collections as collections_abc
ADDR_RE = re.compile(r'\<([a-zA-Z0-9_\-\.]+) object at 0x[a-fA-F0-9]+\>')
_SEPARATOR = uuid.uuid1().hex
_FIRST_ARG = object()
_ARGUMENT_REPR = object()
def _CleanRepr(obj):
return ADDR_RE.sub(r'<\1>', repr(obj))
# Helper function formerly from the unittest module, removed from it in
# Python 2.7.
def _StrClass(cls):
return '%s.%s' % (cls.__module__, cls.__name__)
def _NonStringIterable(obj):
return (isinstance(obj, collections_abc.Iterable) and
not isinstance(obj, str))
def _FormatParameterList(testcase_params):
if isinstance(testcase_params, collections_abc.Mapping):
return ', '.join('%s=%s' % (argname, _CleanRepr(value))
for argname, value in testcase_params.items())
elif _NonStringIterable(testcase_params):
return ', '.join(map(_CleanRepr, testcase_params))
else:
return _FormatParameterList((testcase_params,))
class _ParameterizedTestIter(object):
"""Callable and iterable class for producing new test cases."""
def __init__(self, test_method, testcases, naming_type):
"""Returns concrete test functions for a test and a list of parameters.
The naming_type is used to determine the name of the concrete
functions as reported by the unittest framework. If naming_type is
_FIRST_ARG, the testcases must be tuples, and the first element must
have a string representation that is a valid Python identifier.
Args:
test_method: The decorated test method.
testcases: (list of tuple/dict) A list of parameter
tuples/dicts for individual test invocations.
naming_type: The test naming type, either _NAMED or _ARGUMENT_REPR.
"""
self._test_method = test_method
self.testcases = testcases
self._naming_type = naming_type
def __call__(self, *args, **kwargs):
raise RuntimeError('You appear to be running a parameterized test case '
'without having inherited from parameterized.'
'TestCase. This is bad because none of '
'your test cases are actually being run.')
def __iter__(self):
test_method = self._test_method
naming_type = self._naming_type
def MakeBoundParamTest(testcase_params):
@functools.wraps(test_method)
def BoundParamTest(self):
if isinstance(testcase_params, collections_abc.Mapping):
test_method(self, **testcase_params)
elif _NonStringIterable(testcase_params):
test_method(self, *testcase_params)
else:
test_method(self, testcase_params)
if naming_type is _FIRST_ARG:
# Signal the metaclass that the name of the test function is unique
# and descriptive.
BoundParamTest.__x_use_name__ = True
BoundParamTest.__name__ += str(testcase_params[0])
testcase_params = testcase_params[1:]
elif naming_type is _ARGUMENT_REPR:
# __x_extra_id__ is used to pass naming information to the __new__
# method of TestGeneratorMetaclass.
# The metaclass will make sure to create a unique, but nondescriptive
# name for this test.
BoundParamTest.__x_extra_id__ = '(%s)' % (
_FormatParameterList(testcase_params),)
else:
raise RuntimeError('%s is not a valid naming type.' % (naming_type,))
BoundParamTest.__doc__ = '%s(%s)' % (
BoundParamTest.__name__, _FormatParameterList(testcase_params))
if test_method.__doc__:
BoundParamTest.__doc__ += '\n%s' % (test_method.__doc__,)
return BoundParamTest
return (MakeBoundParamTest(c) for c in self.testcases)
def _IsSingletonList(testcases):
"""True iff testcases contains only a single non-tuple element."""
return len(testcases) == 1 and not isinstance(testcases[0], tuple)
def _ModifyClass(class_object, testcases, naming_type):
assert not getattr(class_object, '_id_suffix', None), (
'Cannot add parameters to %s,'
' which already has parameterized methods.' % (class_object,))
class_object._id_suffix = id_suffix = {}
# We change the size of __dict__ while we iterate over it,
# which Python 3.x will complain about, so use copy().
for name, obj in class_object.__dict__.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix)
and isinstance(obj, types.FunctionType)):
delattr(class_object, name)
methods = {}
_UpdateClassDictForParamTestCase(
methods, id_suffix, name,
_ParameterizedTestIter(obj, testcases, naming_type))
for name, meth in methods.items():
setattr(class_object, name, meth)
def _ParameterDecorator(naming_type, testcases):
"""Implementation of the parameterization decorators.
Args:
naming_type: The naming type.
testcases: Testcase parameters.
Returns:
A function for modifying the decorated object.
"""
def _Apply(obj):
if isinstance(obj, type):
_ModifyClass(
obj,
list(testcases) if not isinstance(testcases, collections_abc.Sequence)
else testcases,
naming_type)
return obj
else:
return _ParameterizedTestIter(obj, testcases, naming_type)
if _IsSingletonList(testcases):
assert _NonStringIterable(testcases[0]), (
'Single parameter argument must be a non-string iterable')
testcases = testcases[0]
return _Apply
def parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples/dicts/objects (for tests
with only one argument).
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_ARGUMENT_REPR, testcases)
def named_parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example. The first element of
each parameter tuple should be a string and will be appended to the
name of the test method.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples.
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_FIRST_ARG, testcases)
class TestGeneratorMetaclass(type):
"""Metaclass for test cases with test generators.
A test generator is an iterable in a testcase that produces callables. These
callables must be single-argument methods. These methods are injected into
the class namespace and the original iterable is removed. If the name of the
iterable conforms to the test pattern, the injected methods will be picked
up as tests by the unittest framework.
In general, it is supposed to be used in conjunction with the
parameters decorator.
"""
def __new__(mcs, class_name, bases, dct):
dct['_id_suffix'] = id_suffix = {}
for name, obj in dct.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix) and
_NonStringIterable(obj)):
iterator = iter(obj)
dct.pop(name)
_UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator)
return type.__new__(mcs, class_name, bases, dct)
def _UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator):
"""Adds individual test cases to a dictionary.
Args:
dct: The target dictionary.
id_suffix: The dictionary for mapping names to test IDs.
name: The original name of the test case.
iterator: The iterator generating the individual test cases.
"""
for idx, func in enumerate(iterator):
assert callable(func), 'Test generators must yield callables, got %r' % (
func,)
if getattr(func, '__x_use_name__', False):
new_name = func.__name__
else:
new_name = '%s%s%d' % (name, _SEPARATOR, idx)
assert new_name not in dct, (
'Name of parameterized test case "%s" not unique' % (new_name,))
dct[new_name] = func
id_suffix[new_name] = getattr(func, '__x_extra_id__', '')
class TestCase(unittest.TestCase, metaclass=TestGeneratorMetaclass):
"""Base class for test cases using the parameters decorator."""
def _OriginalName(self):
return self._testMethodName.split(_SEPARATOR)[0]
def __str__(self):
return '%s (%s)' % (self._OriginalName(), _StrClass(self.__class__))
def id(self): # pylint: disable=invalid-name
"""Returns the descriptive ID of the test.
This is used internally by the unittesting framework to get a name
for the test to be used in reports.
Returns:
The test id.
"""
return '%s.%s%s' % (_StrClass(self.__class__),
self._OriginalName(),
self._id_suffix.get(self._testMethodName, ''))
def CoopTestCase(other_base_class):
"""Returns a new base class with a cooperative metaclass base.
This enables the TestCase to be used in combination
with other base classes that have custom metaclasses, such as
mox.MoxTestBase.
Only works with metaclasses that do not override type.__new__.
Example:
import google3
import mox
from google3.testing.pybase import parameterized
class ExampleTest(parameterized.CoopTestCase(mox.MoxTestBase)):
...
Args:
other_base_class: (class) A test case base class.
Returns:
A new class object.
"""
metaclass = type(
'CoopMetaclass',
(other_base_class.__metaclass__,
TestGeneratorMetaclass), {})
return metaclass(
'CoopTestCase',
(other_base_class, TestCase), {})

View file

@ -0,0 +1,112 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Determine which implementation of the protobuf API is used in this process.
"""
import os
import sys
import warnings
try:
# pylint: disable=g-import-not-at-top
from google.protobuf.internal import _api_implementation
# The compile-time constants in the _api_implementation module can be used to
# switch to a certain implementation of the Python API at build time.
_api_version = _api_implementation.api_version
except ImportError:
_api_version = -1 # Unspecified by compiler flags.
if _api_version == 1:
raise ValueError('api_version=1 is no longer supported.')
_default_implementation_type = ('cpp' if _api_version > 0 else 'python')
# This environment variable can be used to switch to a certain implementation
# of the Python API, overriding the compile-time constants in the
# _api_implementation module. Right now only 'python' and 'cpp' are valid
# values. Any other value will be ignored.
_implementation_type = os.getenv('PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION',
_default_implementation_type)
if _implementation_type != 'python':
_implementation_type = 'cpp'
if 'PyPy' in sys.version and _implementation_type == 'cpp':
warnings.warn('PyPy does not work yet with cpp protocol buffers. '
'Falling back to the python implementation.')
_implementation_type = 'python'
# Detect if serialization should be deterministic by default
try:
# The presence of this module in a build allows the proto implementation to
# be upgraded merely via build deps.
#
# NOTE: Merely importing this automatically enables deterministic proto
# serialization for C++ code, but we still need to export it as a boolean so
# that we can do the same for `_implementation_type == 'python'`.
#
# NOTE2: It is possible for C++ code to enable deterministic serialization by
# default _without_ affecting Python code, if the C++ implementation is not in
# use by this module. That is intended behavior, so we don't actually expose
# this boolean outside of this module.
#
# pylint: disable=g-import-not-at-top,unused-import
from google.protobuf import enable_deterministic_proto_serialization
_python_deterministic_proto_serialization = True
except ImportError:
_python_deterministic_proto_serialization = False
# Usage of this function is discouraged. Clients shouldn't care which
# implementation of the API is in use. Note that there is no guarantee
# that differences between APIs will be maintained.
# Please don't use this function if possible.
def Type():
return _implementation_type
def _SetType(implementation_type):
"""Never use! Only for protobuf benchmark."""
global _implementation_type
_implementation_type = implementation_type
# See comment on 'Type' above.
def Version():
return 2
# For internal use only
def IsPythonDefaultSerializationDeterministic():
return _python_deterministic_proto_serialization
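# Sketch of how a client selects and verifies the backend (run in a
# fresh interpreter; the env var is read when this module is imported,
# so it must be set before any google.protobuf import):
#
#     import os
#     os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'
#     from google.protobuf.internal import api_implementation
#     assert api_implementation.Type() == 'python'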

View file

@ -0,0 +1,130 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Builds descriptors, message classes and services for generated _pb2.py.
This file is only called in python generated _pb2.py files. It builds
descriptors, message classes and services that users can directly use
in generated code.
"""
__author__ = 'jieluo@google.com (Jie Luo)'
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
_sym_db = _symbol_database.Default()
def BuildMessageAndEnumDescriptors(file_des, module):
"""Builds message and enum descriptors.
Args:
file_des: FileDescriptor of the .proto file
module: Generated _pb2 module
"""
def BuildNestedDescriptors(msg_des, prefix):
for (name, nested_msg) in msg_des.nested_types_by_name.items():
module_name = prefix + name.upper()
module[module_name] = nested_msg
BuildNestedDescriptors(nested_msg, module_name + '_')
for enum_des in msg_des.enum_types:
module[prefix + enum_des.name.upper()] = enum_des
for (name, msg_des) in file_des.message_types_by_name.items():
module_name = '_' + name.upper()
module[module_name] = msg_des
BuildNestedDescriptors(msg_des, module_name + '_')
def BuildTopDescriptorsAndMessages(file_des, module_name, module):
"""Builds top level descriptors and message classes.
Args:
file_des: FileDescriptor of the .proto file
module_name: str, the name of generated _pb2 module
module: Generated _pb2 module
"""
def BuildMessage(msg_des):
create_dict = {}
for (name, nested_msg) in msg_des.nested_types_by_name.items():
create_dict[name] = BuildMessage(nested_msg)
create_dict['DESCRIPTOR'] = msg_des
create_dict['__module__'] = module_name
message_class = _reflection.GeneratedProtocolMessageType(
msg_des.name, (_message.Message,), create_dict)
_sym_db.RegisterMessage(message_class)
return message_class
# top level enums
for (name, enum_des) in file_des.enum_types_by_name.items():
module['_' + name.upper()] = enum_des
module[name] = enum_type_wrapper.EnumTypeWrapper(enum_des)
for enum_value in enum_des.values:
module[enum_value.name] = enum_value.number
# top level extensions
for (name, extension_des) in file_des.extensions_by_name.items():
module[name.upper() + '_FIELD_NUMBER'] = extension_des.number
module[name] = extension_des
# services
for (name, service) in file_des.services_by_name.items():
module['_' + name.upper()] = service
# Build messages.
for (name, msg_des) in file_des.message_types_by_name.items():
module[name] = BuildMessage(msg_des)
def BuildServices(file_des, module_name, module):
"""Builds services classes and services stub class.
Args:
file_des: FileDescriptor of the .proto file
module_name: str, the name of generated _pb2 module
module: Generated _pb2 module
"""
# pylint: disable=g-import-not-at-top
from google.protobuf import service as _service
from google.protobuf import service_reflection
# pylint: enable=g-import-not-at-top
for (name, service) in file_des.services_by_name.items():
module[name] = service_reflection.GeneratedServiceType(
name, (_service.Service,),
dict(DESCRIPTOR=service, __module__=module_name))
stub_name = name + '_Stub'
module[stub_name] = service_reflection.GeneratedServiceStubType(
stub_name, (module[name],),
dict(DESCRIPTOR=service, __module__=module_name))
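# --- Editor's illustrative sketch (hedged; module/descriptor names are
# assumptions) --- A generated foo_pb2.py typically drives the helpers above
# roughly like this, after constructing its FileDescriptor as DESCRIPTOR:
#
#   from google.protobuf.internal import builder as _builder
#
#   _builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
#   _builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'foo_pb2', globals())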

View file

@@ -0,0 +1,710 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains container classes to represent different protocol buffer types.
This file defines container classes which represent categories of protocol
buffer field types which need extra maintenance. Currently these categories
are:
- Repeated scalar fields - These are all repeated fields which aren't
composite (e.g. they are of simple types like int32, string, etc.).
- Repeated composite fields - Repeated fields which are composite. This
includes groups and nested messages.
"""
import collections.abc
import copy
import pickle
from typing import (
Any,
Iterable,
Iterator,
List,
MutableMapping,
MutableSequence,
NoReturn,
Optional,
Sequence,
TypeVar,
Union,
overload,
)
_T = TypeVar('_T')
_K = TypeVar('_K')
_V = TypeVar('_V')
class BaseContainer(Sequence[_T]):
"""Base container class."""
# Minimizes memory usage and disallows assignment to other attributes.
__slots__ = ['_message_listener', '_values']
def __init__(self, message_listener: Any) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The RepeatedScalarFieldContainer will call this object's
Modified() method when it is modified.
"""
self._message_listener = message_listener
self._values = []
@overload
def __getitem__(self, key: int) -> _T:
...
@overload
def __getitem__(self, key: slice) -> List[_T]:
...
def __getitem__(self, key):
"""Retrieves item by the specified key."""
return self._values[key]
def __len__(self) -> int:
"""Returns the number of elements in the container."""
return len(self._values)
def __ne__(self, other: Any) -> bool:
"""Checks if another instance isn't equal to this one."""
# The concrete classes should define __eq__.
return not self == other
__hash__ = None
def __repr__(self) -> str:
return repr(self._values)
def sort(self, *args, **kwargs) -> None:
# Continue to support the old sort_function keyword argument.
# This is expected to be a rare occurrence, so use LBYL to avoid
# the overhead of actually catching KeyError.
if 'sort_function' in kwargs:
kwargs['cmp'] = kwargs.pop('sort_function')
self._values.sort(*args, **kwargs)
def reverse(self) -> None:
self._values.reverse()
# TODO(slebedev): Remove this. BaseContainer does *not* conform to
# MutableSequence, only its subclasses do.
collections.abc.MutableSequence.register(BaseContainer)
class RepeatedScalarFieldContainer(BaseContainer[_T], MutableSequence[_T]):
"""Simple, type-checked, list-like container for holding repeated scalars."""
# Disallows assignment to other attributes.
__slots__ = ['_type_checker']
def __init__(
self,
message_listener: Any,
type_checker: Any,
) -> None:
"""Args:
message_listener: A MessageListener implementation. The
RepeatedScalarFieldContainer will call this object's Modified() method
when it is modified.
type_checker: A type_checkers.ValueChecker instance to run on elements
inserted into this container.
"""
super().__init__(message_listener)
self._type_checker = type_checker
def append(self, value: _T) -> None:
"""Appends an item to the list. Similar to list.append()."""
self._values.append(self._type_checker.CheckValue(value))
if not self._message_listener.dirty:
self._message_listener.Modified()
def insert(self, key: int, value: _T) -> None:
"""Inserts the item at the specified position. Similar to list.insert()."""
self._values.insert(key, self._type_checker.CheckValue(value))
if not self._message_listener.dirty:
self._message_listener.Modified()
def extend(self, elem_seq: Iterable[_T]) -> None:
"""Extends by appending the given iterable. Similar to list.extend()."""
if elem_seq is None:
return
try:
elem_seq_iter = iter(elem_seq)
except TypeError:
if not elem_seq:
# silently ignore falsy inputs :-/.
# TODO(ptucker): Deprecate this behavior. b/18413862
return
raise
new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
if new_values:
self._values.extend(new_values)
self._message_listener.Modified()
def MergeFrom(
self,
other: Union['RepeatedScalarFieldContainer[_T]', Iterable[_T]],
) -> None:
"""Appends the contents of another repeated field of the same type to this
one. We do not check the types of the individual fields.
"""
self._values.extend(other)
self._message_listener.Modified()
def remove(self, elem: _T):
"""Removes an item from the list. Similar to list.remove()."""
self._values.remove(elem)
self._message_listener.Modified()
def pop(self, key: Optional[int] = -1) -> _T:
"""Removes and returns an item at a given index. Similar to list.pop()."""
value = self._values[key]
self.__delitem__(key)
return value
@overload
def __setitem__(self, key: int, value: _T) -> None:
...
@overload
def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
...
def __setitem__(self, key, value) -> None:
"""Sets the item on the specified position."""
if isinstance(key, slice):
if key.step is not None:
raise ValueError('Extended slices not supported')
self._values[key] = map(self._type_checker.CheckValue, value)
self._message_listener.Modified()
else:
self._values[key] = self._type_checker.CheckValue(value)
self._message_listener.Modified()
def __delitem__(self, key: Union[int, slice]) -> None:
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __eq__(self, other: Any) -> bool:
"""Compares the current instance with another one."""
if self is other:
return True
# Special case for the same type which should be common and fast.
if isinstance(other, self.__class__):
return other._values == self._values
# We are presumably comparing against some other sequence type.
return other == self._values
def __deepcopy__(
self,
unused_memo: Any = None,
) -> 'RepeatedScalarFieldContainer[_T]':
clone = RepeatedScalarFieldContainer(
copy.deepcopy(self._message_listener), self._type_checker)
clone.MergeFrom(self)
return clone
def __reduce__(self, **kwargs) -> NoReturn:
raise pickle.PickleError(
"Can't pickle repeated scalar fields, convert to list first")
# TODO(slebedev): Constrain T to be a subtype of Message.
class RepeatedCompositeFieldContainer(BaseContainer[_T], MutableSequence[_T]):
"""Simple, list-like container for holding repeated composite fields."""
# Disallows assignment to other attributes.
__slots__ = ['_message_descriptor']
def __init__(self, message_listener: Any, message_descriptor: Any) -> None:
"""
Note that we pass in a descriptor instead of the generated class directly,
since at the time we construct a _RepeatedCompositeFieldContainer we
haven't yet necessarily initialized the type that will be contained in the
container.
Args:
message_listener: A MessageListener implementation.
The RepeatedCompositeFieldContainer will call this object's
Modified() method when it is modified.
message_descriptor: A Descriptor instance describing the protocol type
that should be present in this container. We'll use the
_concrete_class field of this descriptor when the client calls add().
"""
super().__init__(message_listener)
self._message_descriptor = message_descriptor
def add(self, **kwargs: Any) -> _T:
"""Adds a new element at the end of the list and returns it. Keyword
arguments may be used to initialize the element.
"""
new_element = self._message_descriptor._concrete_class(**kwargs)
new_element._SetListener(self._message_listener)
self._values.append(new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
return new_element
def append(self, value: _T) -> None:
"""Appends one element by copying the message."""
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
new_element.CopyFrom(value)
self._values.append(new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
def insert(self, key: int, value: _T) -> None:
"""Inserts the item at the specified position by copying."""
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
new_element.CopyFrom(value)
self._values.insert(key, new_element)
if not self._message_listener.dirty:
self._message_listener.Modified()
def extend(self, elem_seq: Iterable[_T]) -> None:
"""Extends by appending the given sequence of elements of the same type
as this one, copying each individual message.
"""
message_class = self._message_descriptor._concrete_class
listener = self._message_listener
values = self._values
for message in elem_seq:
new_element = message_class()
new_element._SetListener(listener)
new_element.MergeFrom(message)
values.append(new_element)
listener.Modified()
def MergeFrom(
self,
other: Union['RepeatedCompositeFieldContainer[_T]', Iterable[_T]],
) -> None:
"""Appends the contents of another repeated field of the same type to this
one, copying each individual message.
"""
self.extend(other)
def remove(self, elem: _T) -> None:
"""Removes an item from the list. Similar to list.remove()."""
self._values.remove(elem)
self._message_listener.Modified()
def pop(self, key: Optional[int] = -1) -> _T:
"""Removes and returns an item at a given index. Similar to list.pop()."""
value = self._values[key]
self.__delitem__(key)
return value
@overload
def __setitem__(self, key: int, value: _T) -> None:
...
@overload
def __setitem__(self, key: slice, value: Iterable[_T]) -> None:
...
def __setitem__(self, key, value):
# This method is implemented to make RepeatedCompositeFieldContainer
# structurally compatible with typing.MutableSequence. It is
# otherwise unsupported and will always raise an error.
raise TypeError(
f'{self.__class__.__name__} object does not support item assignment')
def __delitem__(self, key: Union[int, slice]) -> None:
"""Deletes the item at the specified position."""
del self._values[key]
self._message_listener.Modified()
def __eq__(self, other: Any) -> bool:
"""Compares the current instance with another one."""
if self is other:
return True
if not isinstance(other, self.__class__):
raise TypeError('Can only compare repeated composite fields against '
'other repeated composite fields.')
return self._values == other._values
class ScalarMap(MutableMapping[_K, _V]):
"""Simple, type-checked, dict-like container for holding repeated scalars."""
# Disallows assignment to other attributes.
__slots__ = ['_key_checker', '_value_checker', '_values', '_message_listener',
'_entry_descriptor']
def __init__(
self,
message_listener: Any,
key_checker: Any,
value_checker: Any,
entry_descriptor: Any,
) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The ScalarMap will call this object's Modified() method when it
is modified.
key_checker: A type_checkers.ValueChecker instance to run on keys
inserted into this container.
value_checker: A type_checkers.ValueChecker instance to run on values
inserted into this container.
entry_descriptor: The MessageDescriptor of a map entry: key and value.
"""
self._message_listener = message_listener
self._key_checker = key_checker
self._value_checker = value_checker
self._entry_descriptor = entry_descriptor
self._values = {}
def __getitem__(self, key: _K) -> _V:
try:
return self._values[key]
except KeyError:
key = self._key_checker.CheckValue(key)
val = self._value_checker.DefaultValue()
self._values[key] = val
return val
def __contains__(self, item: _K) -> bool:
# We check the key's type to match the strong-typing flavor of the API.
# Also this makes it easier to match the behavior of the C++ implementation.
self._key_checker.CheckValue(item)
return item in self._values
@overload
def get(self, key: _K) -> Optional[_V]:
...
@overload
def get(self, key: _K, default: _T) -> Union[_V, _T]:
...
# We need to override this explicitly, because our defaultdict-like behavior
# will make the default implementation (from our base class) always insert
# the key.
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def __setitem__(self, key: _K, value: _V) -> _T:
checked_key = self._key_checker.CheckValue(key)
checked_value = self._value_checker.CheckValue(value)
self._values[checked_key] = checked_value
self._message_listener.Modified()
def __delitem__(self, key: _K) -> None:
del self._values[key]
self._message_listener.Modified()
def __len__(self) -> int:
return len(self._values)
def __iter__(self) -> Iterator[_K]:
return iter(self._values)
def __repr__(self) -> str:
return repr(self._values)
def MergeFrom(self, other: 'ScalarMap[_K, _V]') -> None:
self._values.update(other._values)
self._message_listener.Modified()
def InvalidateIterators(self) -> None:
# It appears that the only way to reliably invalidate iterators to
# self._values is to ensure that its size changes.
original = self._values
self._values = original.copy()
original[None] = None
# This is defined in the abstract base, but we can do it much more cheaply.
def clear(self) -> None:
self._values.clear()
self._message_listener.Modified()
def GetEntryClass(self) -> Any:
return self._entry_descriptor._concrete_class
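# --- Editor's illustrative sketch (not part of the vendored file) ---
# ScalarMap reads behave like a defaultdict: __getitem__ above inserts the
# value checker's DefaultValue() for a missing key. With the stub listener
# from the earlier sketch (entry_descriptor is unused here, so None suffices):
#
#   m = containers.ScalarMap(_StubListener(),
#                            type_checkers.UnicodeValueChecker(),
#                            type_checkers.Int32ValueChecker(),
#                            entry_descriptor=None)
#   m['hits'] += 1        # missing key defaults to 0, then is set to 1
#   assert m['hits'] == 1 and m.get('misses') is None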
class MessageMap(MutableMapping[_K, _V]):
"""Simple, type-checked, dict-like container for with submessage values."""
# Disallows assignment to other attributes.
__slots__ = ['_key_checker', '_values', '_message_listener',
'_message_descriptor', '_entry_descriptor']
def __init__(
self,
message_listener: Any,
message_descriptor: Any,
key_checker: Any,
entry_descriptor: Any,
) -> None:
"""
Args:
message_listener: A MessageListener implementation.
The MessageMap will call this object's Modified() method when it
is modified.
message_descriptor: A Descriptor instance describing the protocol type
of the submessage values stored in this container.
key_checker: A type_checkers.ValueChecker instance to run on keys
inserted into this container.
entry_descriptor: The MessageDescriptor of a map entry: key and value.
"""
self._message_listener = message_listener
self._message_descriptor = message_descriptor
self._key_checker = key_checker
self._entry_descriptor = entry_descriptor
self._values = {}
def __getitem__(self, key: _K) -> _V:
key = self._key_checker.CheckValue(key)
try:
return self._values[key]
except KeyError:
new_element = self._message_descriptor._concrete_class()
new_element._SetListener(self._message_listener)
self._values[key] = new_element
self._message_listener.Modified()
return new_element
def get_or_create(self, key: _K) -> _V:
"""get_or_create() is an alias for getitem (ie. map[key]).
Args:
key: The key to get or create in the map.
This is useful in cases where you want to be explicit that the call is
mutating the map. This can avoid lint errors for statements like this
that otherwise would appear to be pointless statements:
msg.my_map[key]
"""
return self[key]
@overload
def get(self, key: _K) -> Optional[_V]:
...
@overload
def get(self, key: _K, default: _T) -> Union[_V, _T]:
...
# We need to override this explicitly, because our defaultdict-like behavior
# will make the default implementation (from our base class) always insert
# the key.
def get(self, key, default=None):
if key in self:
return self[key]
else:
return default
def __contains__(self, item: _K) -> bool:
item = self._key_checker.CheckValue(item)
return item in self._values
def __setitem__(self, key: _K, value: _V) -> NoReturn:
raise ValueError('May not set values directly, call my_map[key].foo = 5')
def __delitem__(self, key: _K) -> None:
key = self._key_checker.CheckValue(key)
del self._values[key]
self._message_listener.Modified()
def __len__(self) -> int:
return len(self._values)
def __iter__(self) -> Iterator[_K]:
return iter(self._values)
def __repr__(self) -> str:
return repr(self._values)
def MergeFrom(self, other: 'MessageMap[_K, _V]') -> None:
# pylint: disable=protected-access
for key in other._values:
# According to documentation: "When parsing from the wire or when merging,
# if there are duplicate map keys the last key seen is used".
if key in self:
del self[key]
self[key].CopyFrom(other[key])
# self._message_listener.Modified() not required here, because
# mutations to submessages already propagate.
def InvalidateIterators(self) -> None:
# It appears that the only way to reliably invalidate iterators to
# self._values is to ensure that its size changes.
original = self._values
self._values = original.copy()
original[None] = None
# This is defined in the abstract base, but we can do it much more cheaply.
def clear(self) -> None:
self._values.clear()
self._message_listener.Modified()
def GetEntryClass(self) -> Any:
return self._entry_descriptor._concrete_class
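# --- Editor's illustrative sketch (hypothetical generated message) ---
# For a message with `map<string, Item> items`, reading a missing key creates
# the submessage in place, while direct assignment is rejected by __setitem__:
#
#   msg.items['a'].count = 1        # entry auto-created on first access
#   msg.items.get_or_create('b')    # explicit-mutation alias for msg.items['b']
#   msg.items['c'] = Item()         # raises ValueError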
class _UnknownField:
"""A parsed unknown field."""
# Disallows assignment to other attributes.
__slots__ = ['_field_number', '_wire_type', '_data']
def __init__(self, field_number, wire_type, data):
self._field_number = field_number
self._wire_type = wire_type
self._data = data
return
def __lt__(self, other):
# pylint: disable=protected-access
return self._field_number < other._field_number
def __eq__(self, other):
if self is other:
return True
# pylint: disable=protected-access
return (self._field_number == other._field_number and
self._wire_type == other._wire_type and
self._data == other._data)
class UnknownFieldRef: # pylint: disable=missing-class-docstring
def __init__(self, parent, index):
self._parent = parent
self._index = index
def _check_valid(self):
if not self._parent:
raise ValueError('UnknownField does not exist. '
'The parent message might be cleared.')
if self._index >= len(self._parent):
raise ValueError('UnknownField does not exist. '
'The parent message might be cleared.')
@property
def field_number(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._field_number
@property
def wire_type(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._wire_type
@property
def data(self):
self._check_valid()
# pylint: disable=protected-access
return self._parent._internal_get(self._index)._data
class UnknownFieldSet:
"""UnknownField container"""
# Disallows assignment to other attributes.
__slots__ = ['_values']
def __init__(self):
self._values = []
def __getitem__(self, index):
if self._values is None:
raise ValueError('UnknownFields does not exist. '
'The parent message might be cleared.')
size = len(self._values)
if index < 0:
index += size
if index < 0 or index >= size:
raise IndexError('index %d out of range' % index)
return UnknownFieldRef(self, index)
def _internal_get(self, index):
return self._values[index]
def __len__(self):
if self._values is None:
raise ValueError('UnknownFields does not exist. '
'The parent message might be cleared.')
return len(self._values)
def _add(self, field_number, wire_type, data):
unknown_field = _UnknownField(field_number, wire_type, data)
self._values.append(unknown_field)
return unknown_field
def __iter__(self):
for i in range(len(self)):
yield UnknownFieldRef(self, i)
def _extend(self, other):
if other is None:
return
# pylint: disable=protected-access
self._values.extend(other._values)
def __eq__(self, other):
if self is other:
return True
# Sort unknown fields because their order shouldn't
# affect the equality test.
values = list(self._values)
if other is None:
return not values
values.sort()
# pylint: disable=protected-access
other_values = sorted(other._values)
return values == other_values
def _clear(self):
for value in self._values:
# pylint: disable=protected-access
if isinstance(value._data, UnknownFieldSet):
value._data._clear() # pylint: disable=protected-access
self._values = None
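# --- Editor's illustrative sketch (hypothetical parsed message) ---
# Unknown fields collected during parsing are reachable through the message's
# UnknownFields() accessor as UnknownFieldRef views; clearing the message sets
# _values to None above, which is why the refs then raise on access:
#
#   for ref in msg.UnknownFields():
#       print(ref.field_number, ref.wire_type, ref.data)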

File diff suppressed because it is too large

View file

@@ -0,0 +1,829 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Code for encoding protocol message primitives.
Contains the logic for encoding every logical protocol field type
into one of the 5 physical wire types.
This code is designed to push the Python interpreter's performance to the
limits.
The basic idea is that at startup time, for every field (i.e. every
FieldDescriptor) we construct two functions: a "sizer" and an "encoder". The
sizer takes a value of this field's type and computes its byte size. The
encoder takes a writer function and a value. It encodes the value into byte
strings and invokes the writer function to write those strings. Typically the
writer function is the write() method of a BytesIO.
We try to do as much work as possible when constructing the writer and the
sizer rather than when calling them. In particular:
* We copy any needed global functions to local variables, so that we do not need
to do costly global table lookups at runtime.
* Similarly, we try to do any attribute lookups at startup time if possible.
* Every field's tag is encoded to bytes at startup, since it can't change at
runtime.
* Whatever component of the field size we can compute at startup, we do.
* We *avoid* sharing code if doing so would make the code slower and not sharing
does not burden us too much. For example, encoders for repeated fields do
not just call the encoders for singular fields in a loop because this would
add an extra function call overhead for every loop iteration; instead, we
manually inline the single-value encoder into the loop.
* If a Python function lacks a return statement, Python actually generates
instructions to pop the result of the last statement off the stack, push
None onto the stack, and then return that. If we really don't care what
value is returned, then we can save two instructions by returning the
result of the last statement. It looks funny but it helps.
* We assume that type and bounds checking has happened at a higher level.
"""
__author__ = 'kenton@google.com (Kenton Varda)'
import struct
from google.protobuf.internal import wire_format
# This will overflow and thus become IEEE-754 "infinity". We would use
# "float('inf')" but it doesn't work on Windows pre-Python-2.6.
_POS_INF = 1e10000
_NEG_INF = -_POS_INF
def _VarintSize(value):
"""Compute the size of a varint value."""
if value <= 0x7f: return 1
if value <= 0x3fff: return 2
if value <= 0x1fffff: return 3
if value <= 0xfffffff: return 4
if value <= 0x7ffffffff: return 5
if value <= 0x3ffffffffff: return 6
if value <= 0x1ffffffffffff: return 7
if value <= 0xffffffffffffff: return 8
if value <= 0x7fffffffffffffff: return 9
return 10
def _SignedVarintSize(value):
"""Compute the size of a signed varint value."""
if value < 0: return 10
if value <= 0x7f: return 1
if value <= 0x3fff: return 2
if value <= 0x1fffff: return 3
if value <= 0xfffffff: return 4
if value <= 0x7ffffffff: return 5
if value <= 0x3ffffffffff: return 6
if value <= 0x1ffffffffffff: return 7
if value <= 0xffffffffffffff: return 8
if value <= 0x7fffffffffffffff: return 9
return 10
def _TagSize(field_number):
"""Returns the number of bytes required to serialize a tag with this field
number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarintSize(wire_format.PackTag(field_number, 0))
# --------------------------------------------------------------------
# In this section we define some generic sizers. Each of these functions
# takes parameters specific to a particular field type, e.g. int32 or fixed64.
# It returns another function which in turn takes parameters specific to a
# particular field, e.g. the field number and whether it is repeated or packed.
# Look at the next section to see how these are used.
def _SimpleSizer(compute_value_size):
"""A sizer which uses the function compute_value_size to compute the size of
each value. Typically compute_value_size is _VarintSize."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = 0
for element in value:
result += compute_value_size(element)
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += compute_value_size(element)
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + compute_value_size(value)
return FieldSize
return SpecificSizer
def _ModifiedSizer(compute_value_size, modify_value):
"""Like SimpleSizer, but modify_value is invoked on each value before it is
passed to compute_value_size. modify_value is typically ZigZagEncode."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = 0
for element in value:
result += compute_value_size(modify_value(element))
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += compute_value_size(modify_value(element))
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + compute_value_size(modify_value(value))
return FieldSize
return SpecificSizer
def _FixedSizer(value_size):
"""Like _SimpleSizer except for a fixed-size field. The input is the size
of one value."""
def SpecificSizer(field_number, is_repeated, is_packed):
tag_size = _TagSize(field_number)
if is_packed:
local_VarintSize = _VarintSize
def PackedFieldSize(value):
result = len(value) * value_size
return result + local_VarintSize(result) + tag_size
return PackedFieldSize
elif is_repeated:
element_size = value_size + tag_size
def RepeatedFieldSize(value):
return len(value) * element_size
return RepeatedFieldSize
else:
field_size = value_size + tag_size
def FieldSize(value):
return field_size
return FieldSize
return SpecificSizer
# ====================================================================
# Here we declare a sizer constructor for each field type. Each "sizer
# constructor" is a function that takes (field_number, is_repeated, is_packed)
# as parameters and returns a sizer, which in turn takes a field value as
# a parameter and returns its encoded size.
Int32Sizer = Int64Sizer = EnumSizer = _SimpleSizer(_SignedVarintSize)
UInt32Sizer = UInt64Sizer = _SimpleSizer(_VarintSize)
SInt32Sizer = SInt64Sizer = _ModifiedSizer(
_SignedVarintSize, wire_format.ZigZagEncode)
Fixed32Sizer = SFixed32Sizer = FloatSizer = _FixedSizer(4)
Fixed64Sizer = SFixed64Sizer = DoubleSizer = _FixedSizer(8)
BoolSizer = _FixedSizer(1)
def StringSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a string field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
local_len = len
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = local_len(element.encode('utf-8'))
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = local_len(value.encode('utf-8'))
return tag_size + local_VarintSize(l) + l
return FieldSize
def BytesSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a bytes field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
local_len = len
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = local_len(element)
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = local_len(value)
return tag_size + local_VarintSize(l) + l
return FieldSize
def GroupSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a group field."""
tag_size = _TagSize(field_number) * 2
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
result += element.ByteSize()
return result
return RepeatedFieldSize
else:
def FieldSize(value):
return tag_size + value.ByteSize()
return FieldSize
def MessageSizer(field_number, is_repeated, is_packed):
"""Returns a sizer for a message field."""
tag_size = _TagSize(field_number)
local_VarintSize = _VarintSize
assert not is_packed
if is_repeated:
def RepeatedFieldSize(value):
result = tag_size * len(value)
for element in value:
l = element.ByteSize()
result += local_VarintSize(l) + l
return result
return RepeatedFieldSize
else:
def FieldSize(value):
l = value.ByteSize()
return tag_size + local_VarintSize(l) + l
return FieldSize
# --------------------------------------------------------------------
# MessageSet is special: it needs custom logic to compute its size properly.
def MessageSetItemSizer(field_number):
"""Returns a sizer for extensions of MessageSet.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
static_size = (_TagSize(1) * 2 + _TagSize(2) + _VarintSize(field_number) +
_TagSize(3))
local_VarintSize = _VarintSize
def FieldSize(value):
l = value.ByteSize()
return static_size + local_VarintSize(l) + l
return FieldSize
# --------------------------------------------------------------------
# Map is special: it needs custom logic to compute its size properly.
def MapSizer(field_descriptor, is_message_map):
"""Returns a sizer for a map field."""
# Can't look at field_descriptor.message_type._concrete_class because it may
# not have been initialized yet.
message_type = field_descriptor.message_type
message_sizer = MessageSizer(field_descriptor.number, False, False)
def FieldSize(map_value):
total = 0
for key in map_value:
value = map_value[key]
# It's wasteful to create the messages and throw them away a moment
# later, since we'll do the same again during the actual encode. But there's
# no obvious way to avoid this within the current design without a lot of
# code duplication. For message maps, value.ByteSize() must be called to
# update the cached byte size.
entry_msg = message_type._concrete_class(key=key, value=value)
total += message_sizer(entry_msg)
if is_message_map:
value.ByteSize()
return total
return FieldSize
# ====================================================================
# Encoders!
def _VarintEncoder():
"""Return an encoder for a basic varint value (does not include tag)."""
local_int2byte = struct.Struct('>B').pack
def EncodeVarint(write, value, unused_deterministic=None):
bits = value & 0x7f
value >>= 7
while value:
write(local_int2byte(0x80|bits))
bits = value & 0x7f
value >>= 7
return write(local_int2byte(bits))
return EncodeVarint
def _SignedVarintEncoder():
"""Return an encoder for a basic signed varint value (does not include
tag)."""
local_int2byte = struct.Struct('>B').pack
def EncodeSignedVarint(write, value, unused_deterministic=None):
if value < 0:
value += (1 << 64)
bits = value & 0x7f
value >>= 7
while value:
write(local_int2byte(0x80|bits))
bits = value & 0x7f
value >>= 7
return write(local_int2byte(bits))
return EncodeSignedVarint
_EncodeVarint = _VarintEncoder()
_EncodeSignedVarint = _SignedVarintEncoder()
def _VarintBytes(value):
"""Encode the given integer as a varint and return the bytes. This is only
called at startup time so it doesn't need to be fast."""
pieces = []
_EncodeVarint(pieces.append, value, True)
return b"".join(pieces)
def TagBytes(field_number, wire_type):
"""Encode the given tag and return the bytes. Only called at startup."""
return bytes(_VarintBytes(wire_format.PackTag(field_number, wire_type)))
# --------------------------------------------------------------------
# As with sizers (see above), we have a number of common encoder
# implementations.
def _SimpleEncoder(wire_type, encode_value, compute_value_size):
"""Return a constructor for an encoder for fields of a particular type.
Args:
wire_type: The field's wire type, for encoding tags.
encode_value: A function which encodes an individual value, e.g.
_EncodeVarint().
compute_value_size: A function which computes the size of an individual
value, e.g. _VarintSize().
"""
def SpecificEncoder(field_number, is_repeated, is_packed):
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
size = 0
for element in value:
size += compute_value_size(element)
local_EncodeVarint(write, size, deterministic)
for element in value:
encode_value(write, element, deterministic)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag_bytes)
encode_value(write, element, deterministic)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, deterministic):
write(tag_bytes)
return encode_value(write, value, deterministic)
return EncodeField
return SpecificEncoder
def _ModifiedEncoder(wire_type, encode_value, compute_value_size, modify_value):
"""Like SimpleEncoder but additionally invokes modify_value on every value
before passing it to encode_value. Usually modify_value is ZigZagEncode."""
def SpecificEncoder(field_number, is_repeated, is_packed):
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
size = 0
for element in value:
size += compute_value_size(modify_value(element))
local_EncodeVarint(write, size, deterministic)
for element in value:
encode_value(write, modify_value(element), deterministic)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag_bytes)
encode_value(write, modify_value(element), deterministic)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, deterministic):
write(tag_bytes)
return encode_value(write, modify_value(value), deterministic)
return EncodeField
return SpecificEncoder
def _StructPackEncoder(wire_type, format):
"""Return a constructor for an encoder for a fixed-width field.
Args:
wire_type: The field's wire type, for encoding tags.
format: The format string to pass to struct.pack().
"""
value_size = struct.calcsize(format)
def SpecificEncoder(field_number, is_repeated, is_packed):
local_struct_pack = struct.pack
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value) * value_size, deterministic)
for element in value:
write(local_struct_pack(format, element))
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
write(local_struct_pack(format, element))
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
return write(local_struct_pack(format, value))
return EncodeField
return SpecificEncoder
def _FloatingPointEncoder(wire_type, format):
"""Return a constructor for an encoder for float fields.
This is like StructPackEncoder, but catches errors that may be due to
passing non-finite floating-point values to struct.pack, and makes a
second attempt to encode those values.
Args:
wire_type: The field's wire type, for encoding tags.
format: The format string to pass to struct.pack().
"""
value_size = struct.calcsize(format)
if value_size == 4:
def EncodeNonFiniteOrRaise(write, value):
# Remember that the serialized form uses little-endian byte order.
if value == _POS_INF:
write(b'\x00\x00\x80\x7F')
elif value == _NEG_INF:
write(b'\x00\x00\x80\xFF')
elif value != value: # NaN
write(b'\x00\x00\xC0\x7F')
else:
raise
elif value_size == 8:
def EncodeNonFiniteOrRaise(write, value):
if value == _POS_INF:
write(b'\x00\x00\x00\x00\x00\x00\xF0\x7F')
elif value == _NEG_INF:
write(b'\x00\x00\x00\x00\x00\x00\xF0\xFF')
elif value != value: # NaN
write(b'\x00\x00\x00\x00\x00\x00\xF8\x7F')
else:
raise
else:
raise ValueError('Can\'t encode floating-point values that are '
'%d bytes long (only 4 or 8)' % value_size)
def SpecificEncoder(field_number, is_repeated, is_packed):
local_struct_pack = struct.pack
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value) * value_size, deterministic)
for element in value:
# This try/except block is going to be faster than any code that
# we could write to check whether element is finite.
try:
write(local_struct_pack(format, element))
except SystemError:
EncodeNonFiniteOrRaise(write, element)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
try:
write(local_struct_pack(format, element))
except SystemError:
EncodeNonFiniteOrRaise(write, element)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_type)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
try:
write(local_struct_pack(format, value))
except SystemError:
EncodeNonFiniteOrRaise(write, value)
return EncodeField
return SpecificEncoder
# ====================================================================
# Here we declare an encoder constructor for each field type. These work
# very similarly to sizer constructors, described earlier.
Int32Encoder = Int64Encoder = EnumEncoder = _SimpleEncoder(
wire_format.WIRETYPE_VARINT, _EncodeSignedVarint, _SignedVarintSize)
UInt32Encoder = UInt64Encoder = _SimpleEncoder(
wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize)
SInt32Encoder = SInt64Encoder = _ModifiedEncoder(
wire_format.WIRETYPE_VARINT, _EncodeVarint, _VarintSize,
wire_format.ZigZagEncode)
# Note that Python conveniently guarantees that when using the '<' prefix on
# formats, they will also have the same size across all platforms (as opposed
# to without the prefix, where their sizes depend on the C compiler's basic
# type sizes).
Fixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<I')
Fixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<Q')
SFixed32Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED32, '<i')
SFixed64Encoder = _StructPackEncoder(wire_format.WIRETYPE_FIXED64, '<q')
FloatEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED32, '<f')
DoubleEncoder = _FloatingPointEncoder(wire_format.WIRETYPE_FIXED64, '<d')
def BoolEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a boolean field."""
false_byte = b'\x00'
true_byte = b'\x01'
if is_packed:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
def EncodePackedField(write, value, deterministic):
write(tag_bytes)
local_EncodeVarint(write, len(value), deterministic)
for element in value:
if element:
write(true_byte)
else:
write(false_byte)
return EncodePackedField
elif is_repeated:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
def EncodeRepeatedField(write, value, unused_deterministic=None):
for element in value:
write(tag_bytes)
if element:
write(true_byte)
else:
write(false_byte)
return EncodeRepeatedField
else:
tag_bytes = TagBytes(field_number, wire_format.WIRETYPE_VARINT)
def EncodeField(write, value, unused_deterministic=None):
write(tag_bytes)
if value:
return write(true_byte)
return write(false_byte)
return EncodeField
def StringEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a string field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
local_len = len
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
encoded = element.encode('utf-8')
write(tag)
local_EncodeVarint(write, local_len(encoded), deterministic)
write(encoded)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
encoded = value.encode('utf-8')
write(tag)
local_EncodeVarint(write, local_len(encoded), deterministic)
return write(encoded)
return EncodeField
def BytesEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a bytes field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
local_len = len
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag)
local_EncodeVarint(write, local_len(element), deterministic)
write(element)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(tag)
local_EncodeVarint(write, local_len(value), deterministic)
return write(value)
return EncodeField
def GroupEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a group field."""
start_tag = TagBytes(field_number, wire_format.WIRETYPE_START_GROUP)
end_tag = TagBytes(field_number, wire_format.WIRETYPE_END_GROUP)
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(start_tag)
element._InternalSerialize(write, deterministic)
write(end_tag)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(start_tag)
value._InternalSerialize(write, deterministic)
return write(end_tag)
return EncodeField
def MessageEncoder(field_number, is_repeated, is_packed):
"""Returns an encoder for a message field."""
tag = TagBytes(field_number, wire_format.WIRETYPE_LENGTH_DELIMITED)
local_EncodeVarint = _EncodeVarint
assert not is_packed
if is_repeated:
def EncodeRepeatedField(write, value, deterministic):
for element in value:
write(tag)
local_EncodeVarint(write, element.ByteSize(), deterministic)
element._InternalSerialize(write, deterministic)
return EncodeRepeatedField
else:
def EncodeField(write, value, deterministic):
write(tag)
local_EncodeVarint(write, value.ByteSize(), deterministic)
return value._InternalSerialize(write, deterministic)
return EncodeField
# --------------------------------------------------------------------
# As before, MessageSet is special.
def MessageSetItemEncoder(field_number):
"""Encoder for extensions of MessageSet.
The message set message looks like this:
message MessageSet {
repeated group Item = 1 {
required int32 type_id = 2;
required string message = 3;
}
}
"""
start_bytes = b"".join([
TagBytes(1, wire_format.WIRETYPE_START_GROUP),
TagBytes(2, wire_format.WIRETYPE_VARINT),
_VarintBytes(field_number),
TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED)])
end_bytes = TagBytes(1, wire_format.WIRETYPE_END_GROUP)
local_EncodeVarint = _EncodeVarint
def EncodeField(write, value, deterministic):
write(start_bytes)
local_EncodeVarint(write, value.ByteSize(), deterministic)
value._InternalSerialize(write, deterministic)
return write(end_bytes)
return EncodeField
# --------------------------------------------------------------------
# As before, Map is special.
def MapEncoder(field_descriptor):
"""Encoder for extensions of MessageSet.
Maps always have a wire format like this:
message MapEntry {
key_type key = 1;
value_type value = 2;
}
repeated MapEntry map = N;
"""
# Can't look at field_descriptor.message_type._concrete_class because it may
# not have been initialized yet.
message_type = field_descriptor.message_type
encode_message = MessageEncoder(field_descriptor.number, False, False)
def EncodeField(write, value, deterministic):
value_keys = sorted(value.keys()) if deterministic else value
for key in value_keys:
entry_msg = message_type._concrete_class(key=key, value=value[key])
encode_message(write, entry_msg, deterministic)
return EncodeField
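# --- Editor's note (sketch; `msg` is a hypothetical message with a map) ---
# The `deterministic` flag threaded through every encoder above is what sorts
# map keys when a caller asks for stable output:
#
#   data = msg.SerializeToString(deterministic=True)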

View file

@@ -0,0 +1,124 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""A simple wrapper around enum types to expose utility functions.
Instances are created as properties with the same name as the enum they wrap
on proto classes. For usage, see:
reflection_test.py
"""
__author__ = 'rabsatt@google.com (Kevin Rabsatt)'
class EnumTypeWrapper(object):
"""A utility for finding the names of enum values."""
DESCRIPTOR = None
# This is a type alias, which mypy typing stubs can type as
# a genericized parameter constrained to an int, allowing subclasses
# to be typed with more constraint in .pyi stubs
# E.g.:
# class MyGeneratedEnum(Message):
# ValueType = NewType('ValueType', int)
# def Name(self, number: MyGeneratedEnum.ValueType) -> str
ValueType = int
def __init__(self, enum_type):
"""Inits EnumTypeWrapper with an EnumDescriptor."""
self._enum_type = enum_type
self.DESCRIPTOR = enum_type # pylint: disable=invalid-name
def Name(self, number): # pylint: disable=invalid-name
"""Returns a string containing the name of an enum value."""
try:
return self._enum_type.values_by_number[number].name
except KeyError:
pass # fall out to break exception chaining
if not isinstance(number, int):
raise TypeError(
'Enum value for {} must be an int, but got {} {!r}.'.format(
self._enum_type.name, type(number), number))
else:
# repr here to handle the odd case when you pass in a boolean.
raise ValueError('Enum {} has no name defined for value {!r}'.format(
self._enum_type.name, number))
def Value(self, name): # pylint: disable=invalid-name
"""Returns the value corresponding to the given enum name."""
try:
return self._enum_type.values_by_name[name].number
except KeyError:
pass # fall out to break exception chaining
raise ValueError('Enum {} has no value defined for name {!r}'.format(
self._enum_type.name, name))
def keys(self):
"""Return a list of the string names in the enum.
Returns:
A list of strs, in the order they were defined in the .proto file.
"""
return [value_descriptor.name
for value_descriptor in self._enum_type.values]
def values(self):
"""Return a list of the integer values in the enum.
Returns:
A list of ints, in the order they were defined in the .proto file.
"""
return [value_descriptor.number
for value_descriptor in self._enum_type.values]
def items(self):
"""Return a list of the (name, value) pairs of the enum.
Returns:
A list of (str, int) pairs, in the order they were defined
in the .proto file.
"""
return [(value_descriptor.name, value_descriptor.number)
for value_descriptor in self._enum_type.values]
def __getattr__(self, name):
"""Returns the value corresponding to the given enum name."""
try:
return super(
EnumTypeWrapper,
self).__getattribute__('_enum_type').values_by_name[name].number
except KeyError:
pass # fall out to break exception chaining
raise AttributeError('Enum {} has no value defined for name {!r}'.format(
self._enum_type.name, name))
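# --- Editor's illustrative sketch (hypothetical generated enum `Color`) ---
# A generated module wraps each EnumDescriptor in EnumTypeWrapper, so all of
# the lookups below resolve through the methods above:
#
#   Color.Name(1)          # -> 'RED'
#   Color.Value('RED')     # -> 1
#   Color.items()          # -> [('UNKNOWN', 0), ('RED', 1), ...]
#   Color.RED              # -> 1, via __getattr__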

View file

@@ -0,0 +1,213 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains _ExtensionDict class to represent extensions.
"""
from google.protobuf.internal import type_checkers
from google.protobuf.descriptor import FieldDescriptor
def _VerifyExtensionHandle(message, extension_handle):
"""Verify that the given extension handle is valid."""
if not isinstance(extension_handle, FieldDescriptor):
raise KeyError('HasExtension() expects an extension handle, got: %s' %
extension_handle)
if not extension_handle.is_extension:
raise KeyError('"%s" is not an extension.' % extension_handle.full_name)
if not extension_handle.containing_type:
raise KeyError('"%s" is missing a containing_type.'
% extension_handle.full_name)
if extension_handle.containing_type is not message.DESCRIPTOR:
raise KeyError('Extension "%s" extends message type "%s", but this '
'message is of type "%s".' %
(extension_handle.full_name,
extension_handle.containing_type.full_name,
message.DESCRIPTOR.full_name))
# TODO(robinson): Unify error handling of "unknown extension" crap.
# TODO(robinson): Support iteritems()-style iteration over all
# extensions with the "has" bits turned on?
class _ExtensionDict(object):
"""Dict-like container for Extension fields on proto instances.
Note that in all cases we expect extension handles to be
FieldDescriptors.
"""
def __init__(self, extended_message):
"""
Args:
extended_message: Message instance for which we are the Extensions dict.
"""
self._extended_message = extended_message
def __getitem__(self, extension_handle):
"""Returns the current value of the given extension handle."""
_VerifyExtensionHandle(self._extended_message, extension_handle)
result = self._extended_message._fields.get(extension_handle)
if result is not None:
return result
if extension_handle.label == FieldDescriptor.LABEL_REPEATED:
result = extension_handle._default_constructor(self._extended_message)
elif extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
message_type = extension_handle.message_type
if not hasattr(message_type, '_concrete_class'):
# pylint: disable=protected-access
self._extended_message._FACTORY.GetPrototype(message_type)
assert getattr(extension_handle.message_type, '_concrete_class', None), (
'Uninitialized concrete class found for field %r (message type %r)'
% (extension_handle.full_name,
extension_handle.message_type.full_name))
result = extension_handle.message_type._concrete_class()
try:
result._SetListener(self._extended_message._listener_for_children)
except ReferenceError:
pass
else:
# Singular scalar -- just return the default without inserting into the
# dict.
return extension_handle.default_value
# Atomically check if another thread has preempted us and, if not, swap
# in the new object we just created. If someone has preempted us, we
# take that object and discard ours.
# WARNING: We are relying on setdefault() being atomic. This is true
# in CPython but we haven't investigated others. This warning appears
# in several other locations in this file.
result = self._extended_message._fields.setdefault(
extension_handle, result)
return result
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
my_fields = self._extended_message.ListFields()
other_fields = other._extended_message.ListFields()
# Get rid of non-extension fields.
my_fields = [field for field in my_fields if field.is_extension]
other_fields = [field for field in other_fields if field.is_extension]
return my_fields == other_fields
def __ne__(self, other):
return not self == other
def __len__(self):
fields = self._extended_message.ListFields()
# Get rid of non-extension fields.
extension_fields = [field for field in fields if field[0].is_extension]
return len(extension_fields)
def __hash__(self):
raise TypeError('unhashable object')
# Note that this is only meaningful for non-repeated, scalar extension
# fields. Note also that we may have to call _Modified() when we do
# successfully set a field this way, to set any necessary "has" bits in the
# ancestors of the extended message.
def __setitem__(self, extension_handle, value):
"""If extension_handle specifies a non-repeated, scalar extension
field, sets the value of that field.
"""
_VerifyExtensionHandle(self._extended_message, extension_handle)
if (extension_handle.label == FieldDescriptor.LABEL_REPEATED or
extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE):
raise TypeError(
'Cannot assign to extension "%s" because it is a repeated or '
'composite type.' % extension_handle.full_name)
# It's slightly wasteful to lookup the type checker each time,
# but we expect this to be a vanishingly uncommon case anyway.
type_checker = type_checkers.GetTypeChecker(extension_handle)
# pylint: disable=protected-access
self._extended_message._fields[extension_handle] = (
type_checker.CheckValue(value))
self._extended_message._Modified()
def __delitem__(self, extension_handle):
self._extended_message.ClearExtension(extension_handle)
def _FindExtensionByName(self, name):
"""Tries to find a known extension with the specified name.
Args:
name: Extension full name.
Returns:
Extension field descriptor.
"""
return self._extended_message._extensions_by_name.get(name, None)
def _FindExtensionByNumber(self, number):
"""Tries to find a known extension with the field number.
Args:
number: Extension field number.
Returns:
Extension field descriptor.
"""
return self._extended_message._extensions_by_number.get(number, None)
def __iter__(self):
# Return a generator over the populated extension fields
return (f[0] for f in self._extended_message.ListFields()
if f[0].is_extension)
def __contains__(self, extension_handle):
_VerifyExtensionHandle(self._extended_message, extension_handle)
if extension_handle not in self._extended_message._fields:
return False
if extension_handle.label == FieldDescriptor.LABEL_REPEATED:
return bool(self._extended_message._fields.get(extension_handle))
if extension_handle.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
value = self._extended_message._fields.get(extension_handle)
# pylint: disable=protected-access
return value is not None and value._is_present_in_parent
return True
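
A minimal usage sketch for the container above; extended_pb2, BaseMessage and
my_ext are hypothetical names standing in for a generated module with a
registered scalar int32 extension:

# Hypothetical generated module; names are illustrative, not part of this diff.
from extended_pb2 import BaseMessage, my_ext

msg = BaseMessage()
my_ext in msg.Extensions        # False: scalar extension not set yet
msg.Extensions[my_ext] = 42     # __setitem__ type-checks and calls _Modified()
msg.Extensions[my_ext]          # 42
del msg.Extensions[my_ext]      # delegates to msg.ClearExtension(my_ext)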

View file

@ -0,0 +1,78 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Defines a listener interface for observing certain
state transitions on Message objects.
Also defines a null implementation of this interface.
"""
__author__ = 'robinson@google.com (Will Robinson)'
class MessageListener(object):
"""Listens for modifications made to a message. Meant to be registered via
Message._SetListener().
Attributes:
dirty: If True, then calling Modified() would be a no-op. This can be
used to avoid these calls entirely in the common case.
"""
def Modified(self):
"""Called every time the message is modified in such a way that the parent
message may need to be updated. This currently means either:
(a) The message was modified for the first time, so the parent message
should henceforth mark the message as present.
(b) The message's cached byte size became dirty -- i.e. the message was
modified for the first time after a previous call to ByteSize().
Therefore the parent should also mark its byte size as dirty.
Note that (a) implies (b), since new objects start out with a client cached
size (zero). However, we document (a) explicitly because it is important.
Modified() will *only* be called in response to one of these two events --
not every time the sub-message is modified.
Note that if the listener's |dirty| attribute is true, then calling
Modified at the moment would be a no-op, so it can be skipped. Performance-
sensitive callers should check this attribute directly before calling since
it will be true most of the time.
"""
raise NotImplementedError
class NullMessageListener(object):
"""No-op MessageListener implementation."""
def Modified(self):
pass
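
A sketch of a conforming listener, for illustration only (the library wires
listeners up internally; user code normally never implements this interface):

class CountingListener(MessageListener):
  """Counts Modified() notifications. dirty stays False, so callers that
  check the attribute before calling will never skip the call."""

  def __init__(self):
    self.dirty = False
    self.call_count = 0

  def Modified(self):
    self.call_count += 1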

View file

@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/message_set_extensions.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n5google/protobuf/internal/message_set_extensions.proto\x12\x18google.protobuf.internal\"\x1e\n\x0eTestMessageSet*\x08\x08\x04\x10\xff\xff\xff\xff\x07:\x02\x08\x01\"\xa5\x01\n\x18TestMessageSetExtension1\x12\t\n\x01i\x18\x0f \x01(\x05\x32~\n\x15message_set_extension\x12(.google.protobuf.internal.TestMessageSet\x18\xab\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension1\"\xa7\x01\n\x18TestMessageSetExtension2\x12\x0b\n\x03str\x18\x19 \x01(\t2~\n\x15message_set_extension\x12(.google.protobuf.internal.TestMessageSet\x18\xca\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension2\"(\n\x18TestMessageSetExtension3\x12\x0c\n\x04text\x18# \x01(\t:\x7f\n\x16message_set_extension3\x12(.google.protobuf.internal.TestMessageSet\x18\xdf\xff\xf6. \x01(\x0b\x32\x32.google.protobuf.internal.TestMessageSetExtension3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.message_set_extensions_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
TestMessageSet.RegisterExtension(message_set_extension3)
TestMessageSet.RegisterExtension(_TESTMESSAGESETEXTENSION1.extensions_by_name['message_set_extension'])
TestMessageSet.RegisterExtension(_TESTMESSAGESETEXTENSION2.extensions_by_name['message_set_extension'])
DESCRIPTOR._options = None
_TESTMESSAGESET._options = None
_TESTMESSAGESET._serialized_options = b'\010\001'
_TESTMESSAGESET._serialized_start=83
_TESTMESSAGESET._serialized_end=113
_TESTMESSAGESETEXTENSION1._serialized_start=116
_TESTMESSAGESETEXTENSION1._serialized_end=281
_TESTMESSAGESETEXTENSION2._serialized_start=284
_TESTMESSAGESETEXTENSION2._serialized_end=451
_TESTMESSAGESETEXTENSION3._serialized_start=453
_TESTMESSAGESETEXTENSION3._serialized_end=493
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,37 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/missing_enum_values.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n2google/protobuf/internal/missing_enum_values.proto\x12\x1fgoogle.protobuf.python.internal\"\xc1\x02\n\x0eTestEnumValues\x12X\n\x14optional_nested_enum\x18\x01 \x01(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnum\x12X\n\x14repeated_nested_enum\x18\x02 \x03(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnum\x12Z\n\x12packed_nested_enum\x18\x03 \x03(\x0e\x32:.google.protobuf.python.internal.TestEnumValues.NestedEnumB\x02\x10\x01\"\x1f\n\nNestedEnum\x12\x08\n\x04ZERO\x10\x00\x12\x07\n\x03ONE\x10\x01\"\xd3\x02\n\x15TestMissingEnumValues\x12_\n\x14optional_nested_enum\x18\x01 \x01(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnum\x12_\n\x14repeated_nested_enum\x18\x02 \x03(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnum\x12\x61\n\x12packed_nested_enum\x18\x03 \x03(\x0e\x32\x41.google.protobuf.python.internal.TestMissingEnumValues.NestedEnumB\x02\x10\x01\"\x15\n\nNestedEnum\x12\x07\n\x03TWO\x10\x02\"\x1b\n\nJustString\x12\r\n\x05\x64ummy\x18\x01 \x02(\t')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.missing_enum_values_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_TESTENUMVALUES.fields_by_name['packed_nested_enum']._options = None
_TESTENUMVALUES.fields_by_name['packed_nested_enum']._serialized_options = b'\020\001'
_TESTMISSINGENUMVALUES.fields_by_name['packed_nested_enum']._options = None
_TESTMISSINGENUMVALUES.fields_by_name['packed_nested_enum']._serialized_options = b'\020\001'
_TESTENUMVALUES._serialized_start=88
_TESTENUMVALUES._serialized_end=409
_TESTENUMVALUES_NESTEDENUM._serialized_start=378
_TESTENUMVALUES_NESTEDENUM._serialized_end=409
_TESTMISSINGENUMVALUES._serialized_start=412
_TESTMISSINGENUMVALUES._serialized_end=751
_TESTMISSINGENUMVALUES_NESTEDENUM._serialized_start=730
_TESTMISSINGENUMVALUES_NESTEDENUM._serialized_end=751
_JUSTSTRING._serialized_start=753
_JUSTSTRING._serialized_end=780
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,29 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/more_extensions_dynamic.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf.internal import more_extensions_pb2 as google_dot_protobuf_dot_internal_dot_more__extensions__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n6google/protobuf/internal/more_extensions_dynamic.proto\x12\x18google.protobuf.internal\x1a.google/protobuf/internal/more_extensions.proto\"\x1f\n\x12\x44ynamicMessageType\x12\t\n\x01\x61\x18\x01 \x01(\x05:J\n\x17\x64ynamic_int32_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x64 \x01(\x05:z\n\x19\x64ynamic_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x65 \x01(\x0b\x32,.google.protobuf.internal.DynamicMessageType:\x83\x01\n\"repeated_dynamic_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x66 \x03(\x0b\x32,.google.protobuf.internal.DynamicMessageType')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.more_extensions_dynamic_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_int32_extension)
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(dynamic_message_extension)
google_dot_protobuf_dot_internal_dot_more__extensions__pb2.ExtendedMessage.RegisterExtension(repeated_dynamic_message_extension)
DESCRIPTOR._options = None
_DYNAMICMESSAGETYPE._serialized_start=132
_DYNAMICMESSAGETYPE._serialized_end=163
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/more_extensions.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n.google/protobuf/internal/more_extensions.proto\x12\x18google.protobuf.internal\"\x99\x01\n\x0fTopLevelMessage\x12\x41\n\nsubmessage\x18\x01 \x01(\x0b\x32).google.protobuf.internal.ExtendedMessageB\x02(\x01\x12\x43\n\x0enested_message\x18\x02 \x01(\x0b\x32\'.google.protobuf.internal.NestedMessageB\x02(\x01\"R\n\rNestedMessage\x12\x41\n\nsubmessage\x18\x01 \x01(\x0b\x32).google.protobuf.internal.ExtendedMessageB\x02(\x01\"K\n\x0f\x45xtendedMessage\x12\x17\n\x0eoptional_int32\x18\xe9\x07 \x01(\x05\x12\x18\n\x0frepeated_string\x18\xea\x07 \x03(\t*\x05\x08\x01\x10\xe8\x07\"-\n\x0e\x46oreignMessage\x12\x1b\n\x13\x66oreign_message_int\x18\x01 \x01(\x05:I\n\x16optional_int_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x01 \x01(\x05:w\n\x1aoptional_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x02 \x01(\x0b\x32(.google.protobuf.internal.ForeignMessage:I\n\x16repeated_int_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x03 \x03(\x05:w\n\x1arepeated_message_extension\x12).google.protobuf.internal.ExtendedMessage\x18\x04 \x03(\x0b\x32(.google.protobuf.internal.ForeignMessage')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.more_extensions_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
ExtendedMessage.RegisterExtension(optional_int_extension)
ExtendedMessage.RegisterExtension(optional_message_extension)
ExtendedMessage.RegisterExtension(repeated_int_extension)
ExtendedMessage.RegisterExtension(repeated_message_extension)
DESCRIPTOR._options = None
_TOPLEVELMESSAGE.fields_by_name['submessage']._options = None
_TOPLEVELMESSAGE.fields_by_name['submessage']._serialized_options = b'(\001'
_TOPLEVELMESSAGE.fields_by_name['nested_message']._options = None
_TOPLEVELMESSAGE.fields_by_name['nested_message']._serialized_options = b'(\001'
_NESTEDMESSAGE.fields_by_name['submessage']._options = None
_NESTEDMESSAGE.fields_by_name['submessage']._serialized_options = b'(\001'
_TOPLEVELMESSAGE._serialized_start=77
_TOPLEVELMESSAGE._serialized_end=230
_NESTEDMESSAGE._serialized_start=232
_NESTEDMESSAGE._serialized_end=314
_EXTENDEDMESSAGE._serialized_start=316
_EXTENDEDMESSAGE._serialized_end=391
_FOREIGNMESSAGE._serialized_start=393
_FOREIGNMESSAGE._serialized_end=438
# @@protoc_insertion_point(module_scope)

File diff suppressed because one or more lines are too long

View file

@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/internal/no_package.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n)google/protobuf/internal/no_package.proto\";\n\x10NoPackageMessage\x12\'\n\x0fno_package_enum\x18\x01 \x01(\x0e\x32\x0e.NoPackageEnum*?\n\rNoPackageEnum\x12\x16\n\x12NO_PACKAGE_VALUE_0\x10\x00\x12\x16\n\x12NO_PACKAGE_VALUE_1\x10\x01')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.internal.no_package_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
_NOPACKAGEENUM._serialized_start=106
_NOPACKAGEENUM._serialized_end=169
_NOPACKAGEMESSAGE._serialized_start=45
_NOPACKAGEMESSAGE._serialized_end=104
# @@protoc_insertion_point(module_scope)

File diff suppressed because it is too large

View file

@ -0,0 +1,435 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides type checking routines.
This module defines type checking utilities in the forms of dictionaries:
VALUE_CHECKERS: A dictionary of field types and a value validation object.
TYPE_TO_BYTE_SIZE_FN: A dictionary with field types and a size computing
function.
TYPE_TO_SERIALIZE_METHOD: A dictionary with field types and serialization
function.
FIELD_TYPE_TO_WIRE_TYPE: A dictionary with field types and their
corresponding wire types.
TYPE_TO_DESERIALIZE_METHOD: A dictionary with field types and deserialization
function.
"""
__author__ = 'robinson@google.com (Will Robinson)'
import ctypes
import numbers
from google.protobuf.internal import decoder
from google.protobuf.internal import encoder
from google.protobuf.internal import wire_format
from google.protobuf import descriptor
_FieldDescriptor = descriptor.FieldDescriptor
def TruncateToFourByteFloat(original):
return ctypes.c_float(original).value
def ToShortestFloat(original):
"""Returns the shortest float that has same value in wire."""
# All 4 byte floats have between 6 and 9 significant digits, so we
# start with 6 as the lower bound.
# The search has to be iterative because formatting with '.9g' directly
# does not remove the noise for most values. For example, for
# float_field = 0.9, '.9g' prints 0.899999976.
precision = 6
rounded = float('{0:.{1}g}'.format(original, precision))
while TruncateToFourByteFloat(rounded) != original:
precision += 1
rounded = float('{0:.{1}g}'.format(original, precision))
return rounded
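
A worked example of the two helpers above: 0.9 is not exactly representable
in four bytes, so the stored value prints noisily until the shortest
equivalent float is found.

stored = TruncateToFourByteFloat(0.9)   # 0.8999999761581421
'{0:.9g}'.format(stored)                # '0.899999976' -- the noise
ToShortestFloat(stored)                 # 0.9, same value after truncation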
def SupportsOpenEnums(field_descriptor):
return field_descriptor.containing_type.syntax == 'proto3'
def GetTypeChecker(field):
"""Returns a type checker for a message field of the specified types.
Args:
field: FieldDescriptor object for this field.
Returns:
An instance of TypeChecker which can be used to verify the types
of values assigned to a field of the specified type.
"""
if (field.cpp_type == _FieldDescriptor.CPPTYPE_STRING and
field.type == _FieldDescriptor.TYPE_STRING):
return UnicodeValueChecker()
if field.cpp_type == _FieldDescriptor.CPPTYPE_ENUM:
if SupportsOpenEnums(field):
# When open enums are supported, any int32 can be assigned.
return _VALUE_CHECKERS[_FieldDescriptor.CPPTYPE_INT32]
else:
return EnumValueChecker(field.enum_type)
return _VALUE_CHECKERS[field.cpp_type]
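
A hedged sketch of how the lookup is used; fd is assumed to be a
FieldDescriptor for a plain int32 field:

checker = GetTypeChecker(fd)
checker.CheckValue(7)     # returns 7
checker.CheckValue('7')   # raises TypeError: str has no __index__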
# None of the typecheckers below make any attempt to guard against people
# subclassing builtin types and doing weird things. We're not trying to
# protect against malicious clients here, just people accidentally shooting
# themselves in the foot in obvious ways.
class TypeChecker(object):
"""Type checker used to catch type errors as early as possible
when the client is setting scalar fields in protocol messages.
"""
def __init__(self, *acceptable_types):
self._acceptable_types = acceptable_types
def CheckValue(self, proposed_value):
"""Type check the provided value and return it.
The returned value might have been normalized to another type.
"""
if not isinstance(proposed_value, self._acceptable_types):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), self._acceptable_types))
raise TypeError(message)
return proposed_value
class TypeCheckerWithDefault(TypeChecker):
def __init__(self, default_value, *acceptable_types):
TypeChecker.__init__(self, *acceptable_types)
self._default_value = default_value
def DefaultValue(self):
return self._default_value
class BoolValueChecker(object):
"""Type checker used for bool fields."""
def CheckValue(self, proposed_value):
if not hasattr(proposed_value, '__index__') or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (bool, int)))
raise TypeError(message)
return bool(proposed_value)
def DefaultValue(self):
return False
# IntValueChecker and its subclasses perform integer type-checks
# and bounds-checks.
class IntValueChecker(object):
"""Checker used for integer fields. Performs type-check and range check."""
def CheckValue(self, proposed_value):
if not hasattr(proposed_value, '__index__') or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int,)))
raise TypeError(message)
if not self._MIN <= int(proposed_value) <= self._MAX:
raise ValueError('Value out of range: %d' % proposed_value)
# We force all values to int to make alternate implementations where the
# distinction is more significant (e.g. the C++ implementation) simpler.
proposed_value = int(proposed_value)
return proposed_value
def DefaultValue(self):
return 0
class EnumValueChecker(object):
"""Checker used for enum fields. Performs type-check and range check."""
def __init__(self, enum_type):
self._enum_type = enum_type
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, numbers.Integral):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (int,)))
raise TypeError(message)
if int(proposed_value) not in self._enum_type.values_by_number:
raise ValueError('Unknown enum value: %d' % proposed_value)
return proposed_value
def DefaultValue(self):
return self._enum_type.values[0].number
class UnicodeValueChecker(object):
"""Checker used for string fields.
Always returns a unicode value, even if the input is of type str.
"""
def CheckValue(self, proposed_value):
if not isinstance(proposed_value, (bytes, str)):
message = ('%.1024r has type %s, but expected one of: %s' %
(proposed_value, type(proposed_value), (bytes, str)))
raise TypeError(message)
# If the value is of type 'bytes' make sure that it is valid UTF-8 data.
if isinstance(proposed_value, bytes):
try:
proposed_value = proposed_value.decode('utf-8')
except UnicodeDecodeError:
raise ValueError('%.1024r has type bytes, but isn\'t valid UTF-8 '
'encoding. Non-UTF-8 strings must be converted to '
'unicode objects before being added.' %
(proposed_value))
else:
try:
proposed_value.encode('utf8')
except UnicodeEncodeError:
raise ValueError('%.1024r isn\'t a valid unicode string and '
'can\'t be encoded in UTF-8.'%
(proposed_value))
return proposed_value
def DefaultValue(self):
return u""
class Int32ValueChecker(IntValueChecker):
# We're sure to use ints instead of longs here since comparison may be more
# efficient.
_MIN = -2147483648
_MAX = 2147483647
class Uint32ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 32) - 1
class Int64ValueChecker(IntValueChecker):
_MIN = -(1 << 63)
_MAX = (1 << 63) - 1
class Uint64ValueChecker(IntValueChecker):
_MIN = 0
_MAX = (1 << 64) - 1
# The max 4 bytes float is about 3.4028234663852886e+38
_FLOAT_MAX = float.fromhex('0x1.fffffep+127')
_FLOAT_MIN = -_FLOAT_MAX
_INF = float('inf')
_NEG_INF = float('-inf')
class DoubleValueChecker(object):
"""Checker used for double fields.
Performs type-check and range check.
"""
def CheckValue(self, proposed_value):
"""Check and convert proposed_value to float."""
if (not hasattr(proposed_value, '__float__') and
not hasattr(proposed_value, '__index__')) or (
type(proposed_value).__module__ == 'numpy' and
type(proposed_value).__name__ == 'ndarray'):
message = ('%.1024r has type %s, but expected one of: int, float' %
(proposed_value, type(proposed_value)))
raise TypeError(message)
return float(proposed_value)
def DefaultValue(self):
return 0.0
class FloatValueChecker(DoubleValueChecker):
"""Checker used for float fields.
Performs type-check and range check.
Values exceeding a 32-bit float will be converted to inf/-inf.
"""
def CheckValue(self, proposed_value):
"""Check and convert proposed_value to float."""
converted_value = super().CheckValue(proposed_value)
# This inf rounding matches the C++ proto SafeDoubleToFloat logic.
if converted_value > _FLOAT_MAX:
return _INF
if converted_value < _FLOAT_MIN:
return _NEG_INF
return TruncateToFourByteFloat(converted_value)
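
For instance, the overflow-to-infinity behavior described above (a sketch):

fc = FloatValueChecker()
fc.CheckValue(1e39)    # inf: exceeds _FLOAT_MAX
fc.CheckValue(-1e39)   # -inf
fc.CheckValue(0.1)     # 0.10000000149011612, truncated to 4-byte precision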
# Type-checkers for all scalar CPPTYPEs.
_VALUE_CHECKERS = {
_FieldDescriptor.CPPTYPE_INT32: Int32ValueChecker(),
_FieldDescriptor.CPPTYPE_INT64: Int64ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT32: Uint32ValueChecker(),
_FieldDescriptor.CPPTYPE_UINT64: Uint64ValueChecker(),
_FieldDescriptor.CPPTYPE_DOUBLE: DoubleValueChecker(),
_FieldDescriptor.CPPTYPE_FLOAT: FloatValueChecker(),
_FieldDescriptor.CPPTYPE_BOOL: BoolValueChecker(),
_FieldDescriptor.CPPTYPE_STRING: TypeCheckerWithDefault(b'', bytes),
}
# Map from field type to a function F, such that F(field_num, value)
# gives the total byte size for a value of the given type. This
# byte size includes tag information and any other additional space
# associated with serializing "value".
TYPE_TO_BYTE_SIZE_FN = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.DoubleByteSize,
_FieldDescriptor.TYPE_FLOAT: wire_format.FloatByteSize,
_FieldDescriptor.TYPE_INT64: wire_format.Int64ByteSize,
_FieldDescriptor.TYPE_UINT64: wire_format.UInt64ByteSize,
_FieldDescriptor.TYPE_INT32: wire_format.Int32ByteSize,
_FieldDescriptor.TYPE_FIXED64: wire_format.Fixed64ByteSize,
_FieldDescriptor.TYPE_FIXED32: wire_format.Fixed32ByteSize,
_FieldDescriptor.TYPE_BOOL: wire_format.BoolByteSize,
_FieldDescriptor.TYPE_STRING: wire_format.StringByteSize,
_FieldDescriptor.TYPE_GROUP: wire_format.GroupByteSize,
_FieldDescriptor.TYPE_MESSAGE: wire_format.MessageByteSize,
_FieldDescriptor.TYPE_BYTES: wire_format.BytesByteSize,
_FieldDescriptor.TYPE_UINT32: wire_format.UInt32ByteSize,
_FieldDescriptor.TYPE_ENUM: wire_format.EnumByteSize,
_FieldDescriptor.TYPE_SFIXED32: wire_format.SFixed32ByteSize,
_FieldDescriptor.TYPE_SFIXED64: wire_format.SFixed64ByteSize,
_FieldDescriptor.TYPE_SINT32: wire_format.SInt32ByteSize,
_FieldDescriptor.TYPE_SINT64: wire_format.SInt64ByteSize
}
# Maps from field types to encoder constructors.
TYPE_TO_ENCODER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleEncoder,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatEncoder,
_FieldDescriptor.TYPE_INT64: encoder.Int64Encoder,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Encoder,
_FieldDescriptor.TYPE_INT32: encoder.Int32Encoder,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Encoder,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Encoder,
_FieldDescriptor.TYPE_BOOL: encoder.BoolEncoder,
_FieldDescriptor.TYPE_STRING: encoder.StringEncoder,
_FieldDescriptor.TYPE_GROUP: encoder.GroupEncoder,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageEncoder,
_FieldDescriptor.TYPE_BYTES: encoder.BytesEncoder,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Encoder,
_FieldDescriptor.TYPE_ENUM: encoder.EnumEncoder,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Encoder,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Encoder,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Encoder,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Encoder,
}
# Maps from field types to sizer constructors.
TYPE_TO_SIZER = {
_FieldDescriptor.TYPE_DOUBLE: encoder.DoubleSizer,
_FieldDescriptor.TYPE_FLOAT: encoder.FloatSizer,
_FieldDescriptor.TYPE_INT64: encoder.Int64Sizer,
_FieldDescriptor.TYPE_UINT64: encoder.UInt64Sizer,
_FieldDescriptor.TYPE_INT32: encoder.Int32Sizer,
_FieldDescriptor.TYPE_FIXED64: encoder.Fixed64Sizer,
_FieldDescriptor.TYPE_FIXED32: encoder.Fixed32Sizer,
_FieldDescriptor.TYPE_BOOL: encoder.BoolSizer,
_FieldDescriptor.TYPE_STRING: encoder.StringSizer,
_FieldDescriptor.TYPE_GROUP: encoder.GroupSizer,
_FieldDescriptor.TYPE_MESSAGE: encoder.MessageSizer,
_FieldDescriptor.TYPE_BYTES: encoder.BytesSizer,
_FieldDescriptor.TYPE_UINT32: encoder.UInt32Sizer,
_FieldDescriptor.TYPE_ENUM: encoder.EnumSizer,
_FieldDescriptor.TYPE_SFIXED32: encoder.SFixed32Sizer,
_FieldDescriptor.TYPE_SFIXED64: encoder.SFixed64Sizer,
_FieldDescriptor.TYPE_SINT32: encoder.SInt32Sizer,
_FieldDescriptor.TYPE_SINT64: encoder.SInt64Sizer,
}
# Maps from field type to a decoder constructor.
TYPE_TO_DECODER = {
_FieldDescriptor.TYPE_DOUBLE: decoder.DoubleDecoder,
_FieldDescriptor.TYPE_FLOAT: decoder.FloatDecoder,
_FieldDescriptor.TYPE_INT64: decoder.Int64Decoder,
_FieldDescriptor.TYPE_UINT64: decoder.UInt64Decoder,
_FieldDescriptor.TYPE_INT32: decoder.Int32Decoder,
_FieldDescriptor.TYPE_FIXED64: decoder.Fixed64Decoder,
_FieldDescriptor.TYPE_FIXED32: decoder.Fixed32Decoder,
_FieldDescriptor.TYPE_BOOL: decoder.BoolDecoder,
_FieldDescriptor.TYPE_STRING: decoder.StringDecoder,
_FieldDescriptor.TYPE_GROUP: decoder.GroupDecoder,
_FieldDescriptor.TYPE_MESSAGE: decoder.MessageDecoder,
_FieldDescriptor.TYPE_BYTES: decoder.BytesDecoder,
_FieldDescriptor.TYPE_UINT32: decoder.UInt32Decoder,
_FieldDescriptor.TYPE_ENUM: decoder.EnumDecoder,
_FieldDescriptor.TYPE_SFIXED32: decoder.SFixed32Decoder,
_FieldDescriptor.TYPE_SFIXED64: decoder.SFixed64Decoder,
_FieldDescriptor.TYPE_SINT32: decoder.SInt32Decoder,
_FieldDescriptor.TYPE_SINT64: decoder.SInt64Decoder,
}
# Maps from field type to expected wiretype.
FIELD_TYPE_TO_WIRE_TYPE = {
_FieldDescriptor.TYPE_DOUBLE: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FLOAT: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_INT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_UINT64: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_INT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_FIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_FIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_BOOL: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_STRING:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_GROUP: wire_format.WIRETYPE_START_GROUP,
_FieldDescriptor.TYPE_MESSAGE:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_BYTES:
wire_format.WIRETYPE_LENGTH_DELIMITED,
_FieldDescriptor.TYPE_UINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_ENUM: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SFIXED32: wire_format.WIRETYPE_FIXED32,
_FieldDescriptor.TYPE_SFIXED64: wire_format.WIRETYPE_FIXED64,
_FieldDescriptor.TYPE_SINT32: wire_format.WIRETYPE_VARINT,
_FieldDescriptor.TYPE_SINT64: wire_format.WIRETYPE_VARINT,
}

View file

@ -0,0 +1,878 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains well known classes.
This file defines well-known classes which need extra maintenance, including:
- Any
- Duration
- FieldMask
- Struct
- Timestamp
"""
__author__ = 'jieluo@google.com (Jie Luo)'
import calendar
import collections.abc
import datetime
from google.protobuf.descriptor import FieldDescriptor
_TIMESTAMPFORMAT = '%Y-%m-%dT%H:%M:%S'
_NANOS_PER_SECOND = 1000000000
_NANOS_PER_MILLISECOND = 1000000
_NANOS_PER_MICROSECOND = 1000
_MILLIS_PER_SECOND = 1000
_MICROS_PER_SECOND = 1000000
_SECONDS_PER_DAY = 24 * 3600
_DURATION_SECONDS_MAX = 315576000000
class Any(object):
"""Class for Any Message type."""
__slots__ = ()
def Pack(self, msg, type_url_prefix='type.googleapis.com/',
deterministic=None):
"""Packs the specified message into current Any message."""
if len(type_url_prefix) < 1 or type_url_prefix[-1] != '/':
self.type_url = '%s/%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
else:
self.type_url = '%s%s' % (type_url_prefix, msg.DESCRIPTOR.full_name)
self.value = msg.SerializeToString(deterministic=deterministic)
def Unpack(self, msg):
"""Unpacks the current Any message into specified message."""
descriptor = msg.DESCRIPTOR
if not self.Is(descriptor):
return False
msg.ParseFromString(self.value)
return True
def TypeName(self):
"""Returns the protobuf type name of the inner message."""
# Only last part is to be used: b/25630112
return self.type_url.split('/')[-1]
def Is(self, descriptor):
"""Checks if this Any represents the given protobuf type."""
return '/' in self.type_url and self.TypeName() == descriptor.full_name
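
A short Pack/Unpack sketch using the real google.protobuf.any_pb2 module;
msg stands for any message instance you already have:

from google.protobuf import any_pb2

any_msg = any_pb2.Any()
any_msg.Pack(msg)              # type_url gets the 'type.googleapis.com/' prefix
any_msg.TypeName()             # msg.DESCRIPTOR.full_name
if any_msg.Is(msg.DESCRIPTOR):
  assert any_msg.Unpack(msg)   # parses the packed bytes back into msg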
_EPOCH_DATETIME_NAIVE = datetime.datetime.utcfromtimestamp(0)
_EPOCH_DATETIME_AWARE = datetime.datetime.fromtimestamp(
0, tz=datetime.timezone.utc)
class Timestamp(object):
"""Class for Timestamp message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts Timestamp to RFC 3339 date string format.
Returns:
A string converted from timestamp. The string is always Z-normalized
and uses 3, 6 or 9 fractional digits as required to represent the
exact time. Example of the return format: '1972-01-01T10:00:20.021Z'
"""
nanos = self.nanos % _NANOS_PER_SECOND
total_sec = self.seconds + (self.nanos - nanos) // _NANOS_PER_SECOND
seconds = total_sec % _SECONDS_PER_DAY
days = (total_sec - seconds) // _SECONDS_PER_DAY
dt = datetime.datetime(1970, 1, 1) + datetime.timedelta(days, seconds)
result = dt.isoformat()
if (nanos % 1e9) == 0:
# If there are 0 fractional digits, the fractional
# point '.' should be omitted when serializing.
return result + 'Z'
if (nanos % 1e6) == 0:
# Serialize 3 fractional digits.
return result + '.%03dZ' % (nanos / 1e6)
if (nanos % 1e3) == 0:
# Serialize 6 fractional digits.
return result + '.%06dZ' % (nanos / 1e3)
# Serialize 9 fractional digits.
return result + '.%09dZ' % nanos
def FromJsonString(self, value):
"""Parse a RFC 3339 date string format to Timestamp.
Args:
value: A date string. Any fractional digits (or none) and any offset are
accepted as long as they fit into nano-seconds precision.
Example of accepted format: '1972-01-01T10:00:20.021-05:00'
Raises:
ValueError: On parsing problems.
"""
if not isinstance(value, str):
raise ValueError('Timestamp JSON value not a string: {!r}'.format(value))
timezone_offset = value.find('Z')
if timezone_offset == -1:
timezone_offset = value.find('+')
if timezone_offset == -1:
timezone_offset = value.rfind('-')
if timezone_offset == -1:
raise ValueError(
'Failed to parse timestamp: missing valid timezone offset.')
time_value = value[0:timezone_offset]
# Parse datetime and nanos.
point_position = time_value.find('.')
if point_position == -1:
second_value = time_value
nano_value = ''
else:
second_value = time_value[:point_position]
nano_value = time_value[point_position + 1:]
if 't' in second_value:
raise ValueError(
'time data \'{0}\' does not match format \'%Y-%m-%dT%H:%M:%S\', '
'lowercase \'t\' is not accepted'.format(second_value))
date_object = datetime.datetime.strptime(second_value, _TIMESTAMPFORMAT)
td = date_object - datetime.datetime(1970, 1, 1)
seconds = td.seconds + td.days * _SECONDS_PER_DAY
if len(nano_value) > 9:
raise ValueError(
'Failed to parse Timestamp: nanos {0} more than '
'9 fractional digits.'.format(nano_value))
if nano_value:
nanos = round(float('0.' + nano_value) * 1e9)
else:
nanos = 0
# Parse timezone offsets.
if value[timezone_offset] == 'Z':
if len(value) != timezone_offset + 1:
raise ValueError('Failed to parse timestamp: invalid trailing'
' data {0}.'.format(value))
else:
timezone = value[timezone_offset:]
pos = timezone.find(':')
if pos == -1:
raise ValueError(
'Invalid timezone offset value: {0}.'.format(timezone))
if timezone[0] == '+':
seconds -= (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
else:
seconds += (int(timezone[1:pos])*60+int(timezone[pos+1:]))*60
# Set seconds and nanos
self.seconds = int(seconds)
self.nanos = int(nanos)
def GetCurrentTime(self):
"""Get the current UTC into Timestamp."""
self.FromDatetime(datetime.datetime.utcnow())
def ToNanoseconds(self):
"""Converts Timestamp to nanoseconds since epoch."""
return self.seconds * _NANOS_PER_SECOND + self.nanos
def ToMicroseconds(self):
"""Converts Timestamp to microseconds since epoch."""
return (self.seconds * _MICROS_PER_SECOND +
self.nanos // _NANOS_PER_MICROSECOND)
def ToMilliseconds(self):
"""Converts Timestamp to milliseconds since epoch."""
return (self.seconds * _MILLIS_PER_SECOND +
self.nanos // _NANOS_PER_MILLISECOND)
def ToSeconds(self):
"""Converts Timestamp to seconds since epoch."""
return self.seconds
def FromNanoseconds(self, nanos):
"""Converts nanoseconds since epoch to Timestamp."""
self.seconds = nanos // _NANOS_PER_SECOND
self.nanos = nanos % _NANOS_PER_SECOND
def FromMicroseconds(self, micros):
"""Converts microseconds since epoch to Timestamp."""
self.seconds = micros // _MICROS_PER_SECOND
self.nanos = (micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND
def FromMilliseconds(self, millis):
"""Converts milliseconds since epoch to Timestamp."""
self.seconds = millis // _MILLIS_PER_SECOND
self.nanos = (millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND
def FromSeconds(self, seconds):
"""Converts seconds since epoch to Timestamp."""
self.seconds = seconds
self.nanos = 0
def ToDatetime(self, tzinfo=None):
"""Converts Timestamp to a datetime.
Args:
tzinfo: A datetime.tzinfo subclass; defaults to None.
Returns:
If tzinfo is None, returns a timezone-naive UTC datetime (with no timezone
information, i.e. not aware that it's UTC).
Otherwise, returns a timezone-aware datetime in the input timezone.
"""
delta = datetime.timedelta(
seconds=self.seconds,
microseconds=_RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND))
if tzinfo is None:
return _EPOCH_DATETIME_NAIVE + delta
else:
return _EPOCH_DATETIME_AWARE.astimezone(tzinfo) + delta
def FromDatetime(self, dt):
"""Converts datetime to Timestamp.
Args:
dt: A datetime. If it's timezone-naive, it's assumed to be in UTC.
"""
# Using this guide: http://wiki.python.org/moin/WorkingWithTime
# And this conversion guide: http://docs.python.org/library/time.html
# Turn the date parameter into a tuple (struct_time) that can then be
# manipulated into a long value of seconds. During the conversion from
# struct_time to long, the source date is in UTC, and so it follows that the
# correct transformation is calendar.timegm()
self.seconds = calendar.timegm(dt.utctimetuple())
self.nanos = dt.microsecond * _NANOS_PER_MICROSECOND
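
A round-trip sketch for the Timestamp helpers above, via the real
google.protobuf.timestamp_pb2 module:

from google.protobuf import timestamp_pb2

ts = timestamp_pb2.Timestamp()
ts.FromJsonString('1972-01-01T10:00:20.021-05:00')
ts.ToJsonString()      # '1972-01-01T15:00:20.021Z' -- Z-normalized
ts.ToMilliseconds()    # 63126020021, milliseconds since the Unix epoch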
class Duration(object):
"""Class for Duration message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts Duration to string format.
Returns:
A string converted from self. The string format will contain
3, 6, or 9 fractional digits depending on the precision required to
represent the exact Duration value. For example: "1s", "1.010s",
"1.000000100s", "-3.100s"
"""
_CheckDurationValid(self.seconds, self.nanos)
if self.seconds < 0 or self.nanos < 0:
result = '-'
seconds = - self.seconds + int((0 - self.nanos) // 1e9)
nanos = (0 - self.nanos) % 1e9
else:
result = ''
seconds = self.seconds + int(self.nanos // 1e9)
nanos = self.nanos % 1e9
result += '%d' % seconds
if (nanos % 1e9) == 0:
# If there are 0 fractional digits, the fractional
# point '.' should be omitted when serializing.
return result + 's'
if (nanos % 1e6) == 0:
# Serialize 3 fractional digits.
return result + '.%03ds' % (nanos / 1e6)
if (nanos % 1e3) == 0:
# Serialize 6 fractional digits.
return result + '.%06ds' % (nanos / 1e3)
# Serialize 9 fractional digits.
return result + '.%09ds' % nanos
def FromJsonString(self, value):
"""Converts a string to Duration.
Args:
value: A string to be converted. The string must end with 's'. Any
fractional digits (or none) are accepted as long as they fit into
nanosecond precision. For example: "1s", "1.01s", "1.0000001s", "-3.100s".
Raises:
ValueError: On parsing problems.
"""
if not isinstance(value, str):
raise ValueError('Duration JSON value not a string: {!r}'.format(value))
if len(value) < 1 or value[-1] != 's':
raise ValueError(
'Duration must end with letter "s": {0}.'.format(value))
try:
pos = value.find('.')
if pos == -1:
seconds = int(value[:-1])
nanos = 0
else:
seconds = int(value[:pos])
if value[0] == '-':
nanos = int(round(float('-0{0}'.format(value[pos: -1])) *1e9))
else:
nanos = int(round(float('0{0}'.format(value[pos: -1])) *1e9))
_CheckDurationValid(seconds, nanos)
self.seconds = seconds
self.nanos = nanos
except ValueError as e:
raise ValueError(
'Couldn\'t parse duration: {0} : {1}.'.format(value, e))
def ToNanoseconds(self):
"""Converts a Duration to nanoseconds."""
return self.seconds * _NANOS_PER_SECOND + self.nanos
def ToMicroseconds(self):
"""Converts a Duration to microseconds."""
micros = _RoundTowardZero(self.nanos, _NANOS_PER_MICROSECOND)
return self.seconds * _MICROS_PER_SECOND + micros
def ToMilliseconds(self):
"""Converts a Duration to milliseconds."""
millis = _RoundTowardZero(self.nanos, _NANOS_PER_MILLISECOND)
return self.seconds * _MILLIS_PER_SECOND + millis
def ToSeconds(self):
"""Converts a Duration to seconds."""
return self.seconds
def FromNanoseconds(self, nanos):
"""Converts nanoseconds to Duration."""
self._NormalizeDuration(nanos // _NANOS_PER_SECOND,
nanos % _NANOS_PER_SECOND)
def FromMicroseconds(self, micros):
"""Converts microseconds to Duration."""
self._NormalizeDuration(
micros // _MICROS_PER_SECOND,
(micros % _MICROS_PER_SECOND) * _NANOS_PER_MICROSECOND)
def FromMilliseconds(self, millis):
"""Converts milliseconds to Duration."""
self._NormalizeDuration(
millis // _MILLIS_PER_SECOND,
(millis % _MILLIS_PER_SECOND) * _NANOS_PER_MILLISECOND)
def FromSeconds(self, seconds):
"""Converts seconds to Duration."""
self.seconds = seconds
self.nanos = 0
def ToTimedelta(self):
"""Converts Duration to timedelta."""
return datetime.timedelta(
seconds=self.seconds, microseconds=_RoundTowardZero(
self.nanos, _NANOS_PER_MICROSECOND))
def FromTimedelta(self, td):
"""Converts timedelta to Duration."""
self._NormalizeDuration(td.seconds + td.days * _SECONDS_PER_DAY,
td.microseconds * _NANOS_PER_MICROSECOND)
def _NormalizeDuration(self, seconds, nanos):
"""Set Duration by seconds and nanos."""
# Force nanos to be negative if the duration is negative.
if seconds < 0 and nanos > 0:
seconds += 1
nanos -= _NANOS_PER_SECOND
self.seconds = seconds
self.nanos = nanos
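
And a Duration counterpart, via the real google.protobuf.duration_pb2 module:

from google.protobuf import duration_pb2

d = duration_pb2.Duration()
d.FromJsonString('-3.100s')
(d.seconds, d.nanos)   # (-3, -100000000) -- signs kept consistent
d.ToJsonString()       # '-3.100s'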
def _CheckDurationValid(seconds, nanos):
if seconds < -_DURATION_SECONDS_MAX or seconds > _DURATION_SECONDS_MAX:
raise ValueError(
'Duration is not valid: Seconds {0} must be in range '
'[-315576000000, 315576000000].'.format(seconds))
if nanos <= -_NANOS_PER_SECOND or nanos >= _NANOS_PER_SECOND:
raise ValueError(
'Duration is not valid: Nanos {0} must be in range '
'[-999999999, 999999999].'.format(nanos))
if (nanos < 0 and seconds > 0) or (nanos > 0 and seconds < 0):
raise ValueError(
'Duration is not valid: Sign mismatch.')
def _RoundTowardZero(value, divider):
"""Truncates the remainder part after division."""
# For some languages, the sign of the remainder is implementation
# dependent if any of the operands is negative. Here we enforce
# "rounded toward zero" semantics. For example, for (-5) / 2 an
# implementation may give -3 as the result with the remainder being
# 1. This function ensures we always return -2 (closer to zero).
result = value // divider
remainder = value % divider
if result < 0 and remainder > 0:
return result + 1
else:
return result
class FieldMask(object):
"""Class for FieldMask message type."""
__slots__ = ()
def ToJsonString(self):
"""Converts FieldMask to string according to proto3 JSON spec."""
camelcase_paths = []
for path in self.paths:
camelcase_paths.append(_SnakeCaseToCamelCase(path))
return ','.join(camelcase_paths)
def FromJsonString(self, value):
"""Converts string to FieldMask according to proto3 JSON spec."""
if not isinstance(value, str):
raise ValueError('FieldMask JSON value not a string: {!r}'.format(value))
self.Clear()
if value:
for path in value.split(','):
self.paths.append(_CamelCaseToSnakeCase(path))
def IsValidForDescriptor(self, message_descriptor):
"""Checks whether the FieldMask is valid for Message Descriptor."""
for path in self.paths:
if not _IsValidPath(message_descriptor, path):
return False
return True
def AllFieldsFromDescriptor(self, message_descriptor):
"""Gets all direct fields of Message Descriptor to FieldMask."""
self.Clear()
for field in message_descriptor.fields:
self.paths.append(field.name)
def CanonicalFormFromMask(self, mask):
"""Converts a FieldMask to the canonical form.
Removes paths that are covered by another path. For example,
"foo.bar" is covered by "foo" and will be removed if "foo"
is also in the FieldMask. Then sorts all paths in alphabetical order.
Args:
mask: The original FieldMask to be converted.
"""
tree = _FieldMaskTree(mask)
tree.ToFieldMask(self)
def Union(self, mask1, mask2):
"""Merges mask1 and mask2 into this FieldMask."""
_CheckFieldMaskMessage(mask1)
_CheckFieldMaskMessage(mask2)
tree = _FieldMaskTree(mask1)
tree.MergeFromFieldMask(mask2)
tree.ToFieldMask(self)
def Intersect(self, mask1, mask2):
"""Intersects mask1 and mask2 into this FieldMask."""
_CheckFieldMaskMessage(mask1)
_CheckFieldMaskMessage(mask2)
tree = _FieldMaskTree(mask1)
intersection = _FieldMaskTree()
for path in mask2.paths:
tree.IntersectPath(path, intersection)
intersection.ToFieldMask(self)
def MergeMessage(
self, source, destination,
replace_message_field=False, replace_repeated_field=False):
"""Merges fields specified in FieldMask from source to destination.
Args:
source: Source message.
destination: The destination message to be merged into.
replace_message_field: Replace message field if True. Merge message
field if False.
replace_repeated_field: Replace repeated field if True. Append
elements of repeated field if False.
"""
tree = _FieldMaskTree(self)
tree.MergeMessage(
source, destination, replace_message_field, replace_repeated_field)
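
A small sketch of the conversions and set operations above, via the real
google.protobuf.field_mask_pb2 module:

from google.protobuf import field_mask_pb2

mask = field_mask_pb2.FieldMask()
mask.FromJsonString('user.displayName,photo')
list(mask.paths)       # ['user.display_name', 'photo']
mask.ToJsonString()    # 'user.displayName,photo'

union = field_mask_pb2.FieldMask()
union.Union(mask, mask)    # canonical, sorted union of both masks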
def _IsValidPath(message_descriptor, path):
"""Checks whether the path is valid for Message Descriptor."""
parts = path.split('.')
last = parts.pop()
for name in parts:
field = message_descriptor.fields_by_name.get(name)
if (field is None or
field.label == FieldDescriptor.LABEL_REPEATED or
field.type != FieldDescriptor.TYPE_MESSAGE):
return False
message_descriptor = field.message_type
return last in message_descriptor.fields_by_name
def _CheckFieldMaskMessage(message):
"""Raises ValueError if message is not a FieldMask."""
message_descriptor = message.DESCRIPTOR
if (message_descriptor.name != 'FieldMask' or
message_descriptor.file.name != 'google/protobuf/field_mask.proto'):
raise ValueError('Message {0} is not a FieldMask.'.format(
message_descriptor.full_name))
def _SnakeCaseToCamelCase(path_name):
"""Converts a path name from snake_case to camelCase."""
result = []
after_underscore = False
for c in path_name:
if c.isupper():
raise ValueError(
'Fail to print FieldMask to Json string: Path name '
'{0} must not contain uppercase letters.'.format(path_name))
if after_underscore:
if c.islower():
result.append(c.upper())
after_underscore = False
else:
raise ValueError(
'Fail to print FieldMask to Json string: The '
'character after a "_" must be a lowercase letter '
'in path name {0}.'.format(path_name))
elif c == '_':
after_underscore = True
else:
result += c
if after_underscore:
raise ValueError('Fail to print FieldMask to Json string: Trailing "_" '
'in path name {0}.'.format(path_name))
return ''.join(result)
def _CamelCaseToSnakeCase(path_name):
"""Converts a field name from camelCase to snake_case."""
result = []
for c in path_name:
if c == '_':
raise ValueError('Fail to parse FieldMask: Path name '
'{0} must not contain "_"s.'.format(path_name))
if c.isupper():
result += '_'
result += c.lower()
else:
result += c
return ''.join(result)
class _FieldMaskTree(object):
"""Represents a FieldMask in a tree structure.
For example, given a FieldMask "foo.bar,foo.baz,bar.baz",
the FieldMaskTree will be:
[_root] -+- foo -+- bar
| |
| +- baz
|
+- bar --- baz
In the tree, each leaf node represents a field path.
"""
__slots__ = ('_root',)
def __init__(self, field_mask=None):
"""Initializes the tree by FieldMask."""
self._root = {}
if field_mask:
self.MergeFromFieldMask(field_mask)
def MergeFromFieldMask(self, field_mask):
"""Merges a FieldMask to the tree."""
for path in field_mask.paths:
self.AddPath(path)
def AddPath(self, path):
"""Adds a field path into the tree.
If the field path to add is a sub-path of an existing field path
in the tree (i.e., a leaf node), it means the tree already matches
the given path so nothing will be added to the tree. If the path
matches an existing non-leaf node in the tree, that non-leaf node
will be turned into a leaf node with all its children removed because
the path matches all the node's children. Otherwise, a new path will
be added.
Args:
path: The field path to add.
"""
node = self._root
for name in path.split('.'):
if name not in node:
node[name] = {}
elif not node[name]:
# Pre-existing empty node implies we already have this entire tree.
return
node = node[name]
# Remove any sub-trees we might have had.
node.clear()
def ToFieldMask(self, field_mask):
"""Converts the tree to a FieldMask."""
field_mask.Clear()
_AddFieldPaths(self._root, '', field_mask)
def IntersectPath(self, path, intersection):
"""Calculates the intersection part of a field path with this tree.
Args:
path: The field path to calculate the intersection for.
intersection: The out tree to record the intersection part.
"""
node = self._root
for name in path.split('.'):
if name not in node:
return
elif not node[name]:
intersection.AddPath(path)
return
node = node[name]
intersection.AddLeafNodes(path, node)
def AddLeafNodes(self, prefix, node):
"""Adds leaf nodes begin with prefix to this tree."""
if not node:
self.AddPath(prefix)
for name in node:
child_path = prefix + '.' + name
self.AddLeafNodes(child_path, node[name])
def MergeMessage(
self, source, destination,
replace_message, replace_repeated):
"""Merge all fields specified by this tree from source to destination."""
_MergeMessage(
self._root, source, destination, replace_message, replace_repeated)
def _StrConvert(value):
"""Converts value to str if it is not."""
# This file is imported by the C extension, and some methods like ClearField
# require a string for the field name. Python 2 and 3 have different text
# types, so the value may be unicode.
if not isinstance(value, str):
return value.encode('utf-8')
return value
def _MergeMessage(
node, source, destination, replace_message, replace_repeated):
"""Merge all fields specified by a sub-tree from source to destination."""
source_descriptor = source.DESCRIPTOR
for name in node:
child = node[name]
field = source_descriptor.fields_by_name[name]
if field is None:
raise ValueError('Error: Can\'t find field {0} in message {1}.'.format(
name, source_descriptor.full_name))
if child:
# Sub-paths are only allowed for singular message fields.
if (field.label == FieldDescriptor.LABEL_REPEATED or
field.cpp_type != FieldDescriptor.CPPTYPE_MESSAGE):
raise ValueError('Error: Field {0} in message {1} is not a singular '
'message field and cannot have sub-fields.'.format(
name, source_descriptor.full_name))
if source.HasField(name):
_MergeMessage(
child, getattr(source, name), getattr(destination, name),
replace_message, replace_repeated)
continue
if field.label == FieldDescriptor.LABEL_REPEATED:
if replace_repeated:
destination.ClearField(_StrConvert(name))
repeated_source = getattr(source, name)
repeated_destination = getattr(destination, name)
repeated_destination.MergeFrom(repeated_source)
else:
if field.cpp_type == FieldDescriptor.CPPTYPE_MESSAGE:
if replace_message:
destination.ClearField(_StrConvert(name))
if source.HasField(name):
getattr(destination, name).MergeFrom(getattr(source, name))
else:
setattr(destination, name, getattr(source, name))
def _AddFieldPaths(node, prefix, field_mask):
"""Adds the field paths descended from node to field_mask."""
if not node and prefix:
field_mask.paths.append(prefix)
return
for name in sorted(node):
if prefix:
child_path = prefix + '.' + name
else:
child_path = name
_AddFieldPaths(node[name], child_path, field_mask)
def _SetStructValue(struct_value, value):
if value is None:
struct_value.null_value = 0
elif isinstance(value, bool):
# Note: this check must come before the number check because in Python
# True and False are also considered numbers.
struct_value.bool_value = value
elif isinstance(value, str):
struct_value.string_value = value
elif isinstance(value, (int, float)):
struct_value.number_value = value
elif isinstance(value, (dict, Struct)):
struct_value.struct_value.Clear()
struct_value.struct_value.update(value)
elif isinstance(value, (list, ListValue)):
struct_value.list_value.Clear()
struct_value.list_value.extend(value)
else:
    raise ValueError('Unexpected type: {0!r}'.format(type(value)))
def _GetStructValue(struct_value):
which = struct_value.WhichOneof('kind')
if which == 'struct_value':
return struct_value.struct_value
elif which == 'null_value':
return None
elif which == 'number_value':
return struct_value.number_value
elif which == 'string_value':
return struct_value.string_value
elif which == 'bool_value':
return struct_value.bool_value
elif which == 'list_value':
return struct_value.list_value
elif which is None:
raise ValueError('Value not set')
class Struct(object):
"""Class for Struct message type."""
__slots__ = ()
def __getitem__(self, key):
return _GetStructValue(self.fields[key])
def __contains__(self, item):
return item in self.fields
def __setitem__(self, key, value):
_SetStructValue(self.fields[key], value)
def __delitem__(self, key):
del self.fields[key]
def __len__(self):
return len(self.fields)
def __iter__(self):
return iter(self.fields)
def keys(self): # pylint: disable=invalid-name
return self.fields.keys()
def values(self): # pylint: disable=invalid-name
return [self[key] for key in self]
def items(self): # pylint: disable=invalid-name
return [(key, self[key]) for key in self]
def get_or_create_list(self, key):
"""Returns a list for this key, creating if it didn't exist already."""
if not self.fields[key].HasField('list_value'):
# Clear will mark list_value modified which will indeed create a list.
self.fields[key].list_value.Clear()
return self.fields[key].list_value
def get_or_create_struct(self, key):
"""Returns a struct for this key, creating if it didn't exist already."""
if not self.fields[key].HasField('struct_value'):
# Clear will mark struct_value modified which will indeed create a struct.
self.fields[key].struct_value.Clear()
return self.fields[key].struct_value
def update(self, dictionary): # pylint: disable=invalid-name
for key, value in dictionary.items():
_SetStructValue(self.fields[key], value)
collections.abc.MutableMapping.register(Struct)
class ListValue(object):
"""Class for ListValue message type."""
__slots__ = ()
def __len__(self):
return len(self.values)
def append(self, value):
_SetStructValue(self.values.add(), value)
def extend(self, elem_seq):
for value in elem_seq:
self.append(value)
def __getitem__(self, index):
"""Retrieves item by the specified index."""
return _GetStructValue(self.values.__getitem__(index))
def __setitem__(self, index, value):
_SetStructValue(self.values.__getitem__(index), value)
def __delitem__(self, key):
del self.values[key]
def items(self):
for i in range(len(self)):
yield self[i]
def add_struct(self):
"""Appends and returns a struct value as the next value in the list."""
struct_value = self.values.add().struct_value
# Clear will mark struct_value modified which will indeed create a struct.
struct_value.Clear()
return struct_value
def add_list(self):
"""Appends and returns a list value as the next value in the list."""
list_value = self.values.add().list_value
# Clear will mark list_value modified which will indeed create a list.
list_value.Clear()
return list_value
collections.abc.MutableSequence.register(ListValue)
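# Usage sketch for the Struct and ListValue wrappers above, assuming the
# generated module google.protobuf.struct_pb2 (which binds these classes as
# bases via WKTBASES below):
#   from google.protobuf import struct_pb2
#   s = struct_pb2.Struct()
#   s['name'] = 'example'     # _SetStructValue stores it as string_value
#   s['count'] = 2            # stored as number_value, so it reads back as 2.0
#   s.get_or_create_list('tags').extend(['a', 'b'])
#   s['name']   -> 'example'
#   len(s)      -> 3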
WKTBASES = {
'google.protobuf.Any': Any,
'google.protobuf.Duration': Duration,
'google.protobuf.FieldMask': FieldMask,
'google.protobuf.ListValue': ListValue,
'google.protobuf.Struct': Struct,
'google.protobuf.Timestamp': Timestamp,
}

View file

@ -0,0 +1,268 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Constants and static functions to support protocol buffer wire format."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import descriptor
from google.protobuf import message
TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in google/protobuf/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5
# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1
# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
raise AssertionError('Format "Q" is not a 64-bit number.')
def PackTag(field_number, wire_type):
"""Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
"""
if not 0 <= wire_type <= _WIRETYPE_MAX:
raise message.EncodeError('Unknown wire type: %d' % wire_type)
return (field_number << TAG_TYPE_BITS) | wire_type
def UnpackTag(tag):
"""The inverse of PackTag(). Given an unsigned 32-bit number,
returns a (field_number, wire_type) tuple.
"""
return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
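# Worked example of the tag layout ((field_number << 3) | wire_type):
#   PackTag(1, WIRETYPE_VARINT)            ->  8   (0b0000_1000)
#   PackTag(2, WIRETYPE_LENGTH_DELIMITED)  ->  18  (0b0001_0010)
#   UnpackTag(18)                          ->  (2, 2)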
def ZigZagEncode(value):
"""ZigZag Transform: Encodes signed integers so that they can be
effectively used with varint encoding. See wire_format.h for
more details.
"""
if value >= 0:
return value << 1
return (value << 1) ^ (~0)
def ZigZagDecode(value):
"""Inverse of ZigZagEncode()."""
if not value & 0x1:
return value >> 1
return (value >> 1) ^ (~0)
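# ZigZag interleaves negative and positive values so small magnitudes encode
# to small varints:
#   value:    0  -1   1  -2   2
#   encoded:  0   1   2   3   4
# ZigZagDecode(ZigZagEncode(v)) == v for any signed integer v.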
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.
def Int32ByteSize(field_number, int32):
return Int64ByteSize(field_number, int32)
def Int32ByteSizeNoTag(int32):
return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
def Int64ByteSize(field_number, int64):
# Have to convert to uint before calling UInt64ByteSize().
return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
def UInt32ByteSize(field_number, uint32):
return UInt64ByteSize(field_number, uint32)
def UInt64ByteSize(field_number, uint64):
return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
def SInt32ByteSize(field_number, int32):
return UInt32ByteSize(field_number, ZigZagEncode(int32))
def SInt64ByteSize(field_number, int64):
return UInt64ByteSize(field_number, ZigZagEncode(int64))
def Fixed32ByteSize(field_number, fixed32):
return TagByteSize(field_number) + 4
def Fixed64ByteSize(field_number, fixed64):
return TagByteSize(field_number) + 8
def SFixed32ByteSize(field_number, sfixed32):
return TagByteSize(field_number) + 4
def SFixed64ByteSize(field_number, sfixed64):
return TagByteSize(field_number) + 8
def FloatByteSize(field_number, flt):
return TagByteSize(field_number) + 4
def DoubleByteSize(field_number, double):
return TagByteSize(field_number) + 8
def BoolByteSize(field_number, b):
return TagByteSize(field_number) + 1
def EnumByteSize(field_number, enum):
return UInt32ByteSize(field_number, enum)
def StringByteSize(field_number, string):
return BytesByteSize(field_number, string.encode('utf-8'))
def BytesByteSize(field_number, b):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(len(b))
+ len(b))
def GroupByteSize(field_number, message):
return (2 * TagByteSize(field_number) # START and END group.
+ message.ByteSize())
def MessageByteSize(field_number, message):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(message.ByteSize())
+ message.ByteSize())
def MessageSetItemByteSize(field_number, msg):
# First compute the sizes of the tags.
# There are 2 tags for the beginning and ending of the repeated group, that
# is field number 1, one with field number 2 (type_id) and one with field
# number 3 (message).
total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))
# Add the number of bytes for type_id.
total_size += _VarUInt64ByteSizeNoTag(field_number)
message_size = msg.ByteSize()
# The number of bytes for encoding the length of the message.
total_size += _VarUInt64ByteSizeNoTag(message_size)
# The size of the message.
total_size += message_size
return total_size
def TagByteSize(field_number):
"""Returns the bytes required to serialize a tag with this field number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
# Private helper function for the *ByteSize() functions above.
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the number of bytes required to serialize a single varint
using boundary value comparisons. (unrolled loop optimization -WPierce)
uint64 must be unsigned.
"""
if uint64 <= 0x7f: return 1
if uint64 <= 0x3fff: return 2
if uint64 <= 0x1fffff: return 3
if uint64 <= 0xfffffff: return 4
if uint64 <= 0x7ffffffff: return 5
if uint64 <= 0x3ffffffffff: return 6
if uint64 <= 0x1ffffffffffff: return 7
if uint64 <= 0xffffffffffffff: return 8
if uint64 <= 0x7fffffffffffffff: return 9
if uint64 > UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % uint64)
return 10
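# A few boundary cases of the comparisons above (each varint byte carries
# 7 payload bits), including the tag helper that builds on this function:
#   _VarUInt64ByteSizeNoTag(127)    ->  1
#   _VarUInt64ByteSizeNoTag(128)    ->  2
#   _VarUInt64ByteSizeNoTag(16384)  ->  3
#   TagByteSize(15)                 ->  1   # PackTag(15, 0) == 120 <= 0x7f
#   TagByteSize(16)                 ->  2   # PackTag(16, 0) == 128 >  0x7f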
NON_PACKABLE_TYPES = (
descriptor.FieldDescriptor.TYPE_STRING,
descriptor.FieldDescriptor.TYPE_GROUP,
descriptor.FieldDescriptor.TYPE_MESSAGE,
descriptor.FieldDescriptor.TYPE_BYTES
)
def IsTypePackable(field_type):
"""Return true iff packable = true is valid for fields of this type.
Args:
field_type: a FieldDescriptor::Type value.
Returns:
True iff fields of this type are packable.
"""
return field_type not in NON_PACKABLE_TYPES
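# For instance, both of these follow directly from NON_PACKABLE_TYPES:
#   IsTypePackable(descriptor.FieldDescriptor.TYPE_INT32)   ->  True
#   IsTypePackable(descriptor.FieldDescriptor.TYPE_STRING)  ->  False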

View file

@ -0,0 +1,912 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains routines for printing protocol messages in JSON format.
Simple usage example:
# Create a proto object and serialize it to a json format string.
message = my_proto_pb2.MyMessage(foo='bar')
json_string = json_format.MessageToJson(message)
# Parse a json format string to proto object.
message = json_format.Parse(json_string, my_proto_pb2.MyMessage())
"""
__author__ = 'jieluo@google.com (Jie Luo)'
import base64
from collections import OrderedDict
import json
import math
from operator import methodcaller
import re
import sys
from google.protobuf.internal import type_checkers
from google.protobuf import descriptor
from google.protobuf import symbol_database
_TIMESTAMPFOMAT = '%Y-%m-%dT%H:%M:%S'
_INT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT32,
descriptor.FieldDescriptor.CPPTYPE_UINT32,
descriptor.FieldDescriptor.CPPTYPE_INT64,
descriptor.FieldDescriptor.CPPTYPE_UINT64])
_INT64_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_INT64,
descriptor.FieldDescriptor.CPPTYPE_UINT64])
_FLOAT_TYPES = frozenset([descriptor.FieldDescriptor.CPPTYPE_FLOAT,
descriptor.FieldDescriptor.CPPTYPE_DOUBLE])
_INFINITY = 'Infinity'
_NEG_INFINITY = '-Infinity'
_NAN = 'NaN'
_UNPAIRED_SURROGATE_PATTERN = re.compile(
u'[\ud800-\udbff](?![\udc00-\udfff])|(?<![\ud800-\udbff])[\udc00-\udfff]')
_VALID_EXTENSION_NAME = re.compile(r'\[[a-zA-Z0-9\._]*\]$')
class Error(Exception):
"""Top-level module error for json_format."""
class SerializeToJsonError(Error):
"""Thrown if serialization to JSON fails."""
class ParseError(Error):
"""Thrown in case of parsing error."""
def MessageToJson(
message,
including_default_value_fields=False,
preserving_proto_field_name=False,
indent=2,
sort_keys=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None,
ensure_ascii=True):
"""Converts protobuf message to JSON format.
Args:
message: The protocol buffers message instance to serialize.
including_default_value_fields: If True, singular primitive fields,
repeated fields, and map fields will always be serialized. If
False, only serialize non-empty fields. Singular message fields
and oneof fields are not affected by this option.
preserving_proto_field_name: If True, use the original proto field
names as defined in the .proto file. If False, convert the field
names to lowerCamelCase.
indent: The JSON object will be pretty-printed with this indent level.
An indent level of 0 or negative will only insert newlines.
sort_keys: If True, then the output will be sorted by field names.
use_integers_for_enums: If true, print integers instead of enum names.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
float_precision: If set, use this to specify float field valid digits.
ensure_ascii: If True, strings with non-ASCII characters are escaped.
If False, Unicode strings are returned unchanged.
Returns:
A string containing the JSON formatted protocol buffer message.
"""
printer = _Printer(
including_default_value_fields,
preserving_proto_field_name,
use_integers_for_enums,
descriptor_pool,
float_precision=float_precision)
return printer.ToJsonString(message, indent, sort_keys, ensure_ascii)
def MessageToDict(
message,
including_default_value_fields=False,
preserving_proto_field_name=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None):
"""Converts protobuf message to a dictionary.
When the dictionary is encoded to JSON, it conforms to proto3 JSON spec.
Args:
message: The protocol buffers message instance to serialize.
including_default_value_fields: If True, singular primitive fields,
repeated fields, and map fields will always be serialized. If
False, only serialize non-empty fields. Singular message fields
and oneof fields are not affected by this option.
preserving_proto_field_name: If True, use the original proto field
names as defined in the .proto file. If False, convert the field
names to lowerCamelCase.
use_integers_for_enums: If true, print integers instead of enum names.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
float_precision: If set, use this to specify float field valid digits.
Returns:
A dict representation of the protocol buffer message.
"""
printer = _Printer(
including_default_value_fields,
preserving_proto_field_name,
use_integers_for_enums,
descriptor_pool,
float_precision=float_precision)
# pylint: disable=protected-access
return printer._MessageToJsonObject(message)
def _IsMapEntry(field):
return (field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
field.message_type.has_options and
field.message_type.GetOptions().map_entry)
class _Printer(object):
"""JSON format printer for protocol message."""
def __init__(
self,
including_default_value_fields=False,
preserving_proto_field_name=False,
use_integers_for_enums=False,
descriptor_pool=None,
float_precision=None):
self.including_default_value_fields = including_default_value_fields
self.preserving_proto_field_name = preserving_proto_field_name
self.use_integers_for_enums = use_integers_for_enums
self.descriptor_pool = descriptor_pool
if float_precision:
self.float_format = '.{}g'.format(float_precision)
else:
self.float_format = None
def ToJsonString(self, message, indent, sort_keys, ensure_ascii):
js = self._MessageToJsonObject(message)
return json.dumps(
js, indent=indent, sort_keys=sort_keys, ensure_ascii=ensure_ascii)
def _MessageToJsonObject(self, message):
"""Converts message to an object according to Proto3 JSON Specification."""
message_descriptor = message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
return self._WrapperMessageToJsonObject(message)
if full_name in _WKTJSONMETHODS:
return methodcaller(_WKTJSONMETHODS[full_name][0], message)(self)
js = {}
return self._RegularMessageToJsonObject(message, js)
def _RegularMessageToJsonObject(self, message, js):
"""Converts normal message according to Proto3 JSON Specification."""
fields = message.ListFields()
try:
for field, value in fields:
if self.preserving_proto_field_name:
name = field.name
else:
name = field.json_name
if _IsMapEntry(field):
# Convert a map field.
v_field = field.message_type.fields_by_name['value']
js_map = {}
for key in value:
if isinstance(key, bool):
if key:
recorded_key = 'true'
else:
recorded_key = 'false'
else:
recorded_key = str(key)
js_map[recorded_key] = self._FieldToJsonObject(
v_field, value[key])
js[name] = js_map
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
# Convert a repeated field.
js[name] = [self._FieldToJsonObject(field, k)
for k in value]
elif field.is_extension:
name = '[%s]' % field.full_name
js[name] = self._FieldToJsonObject(field, value)
else:
js[name] = self._FieldToJsonObject(field, value)
# Serialize default value if including_default_value_fields is True.
if self.including_default_value_fields:
message_descriptor = message.DESCRIPTOR
for field in message_descriptor.fields:
# Singular message fields and oneof fields will not be affected.
if ((field.label != descriptor.FieldDescriptor.LABEL_REPEATED and
field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE) or
field.containing_oneof):
continue
if self.preserving_proto_field_name:
name = field.name
else:
name = field.json_name
if name in js:
# Skip the field which has been serialized already.
continue
if _IsMapEntry(field):
js[name] = {}
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
js[name] = []
else:
js[name] = self._FieldToJsonObject(field, field.default_value)
except ValueError as e:
raise SerializeToJsonError(
'Failed to serialize {0} field: {1}.'.format(field.name, e))
return js
def _FieldToJsonObject(self, field, value):
"""Converts field value according to Proto3 JSON Specification."""
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
return self._MessageToJsonObject(value)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
if self.use_integers_for_enums:
return value
if field.enum_type.full_name == 'google.protobuf.NullValue':
return None
enum_value = field.enum_type.values_by_number.get(value, None)
if enum_value is not None:
return enum_value.name
else:
if field.file.syntax == 'proto3':
return value
          raise SerializeToJsonError('Enum field contains an integer value '
                                     'which cannot be mapped to an enum value.')
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
if field.type == descriptor.FieldDescriptor.TYPE_BYTES:
# Use base64 Data encoding for bytes
return base64.b64encode(value).decode('utf-8')
else:
return value
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
return bool(value)
elif field.cpp_type in _INT64_TYPES:
return str(value)
elif field.cpp_type in _FLOAT_TYPES:
if math.isinf(value):
if value < 0.0:
return _NEG_INFINITY
else:
return _INFINITY
if math.isnan(value):
return _NAN
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT:
if self.float_format:
return float(format(value, self.float_format))
else:
return type_checkers.ToShortestFloat(value)
return value
def _AnyMessageToJsonObject(self, message):
"""Converts Any message according to Proto3 JSON Specification."""
if not message.ListFields():
return {}
# Must print @type first, use OrderedDict instead of {}
js = OrderedDict()
type_url = message.type_url
js['@type'] = type_url
sub_message = _CreateMessageFromTypeUrl(type_url, self.descriptor_pool)
sub_message.ParseFromString(message.value)
message_descriptor = sub_message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
js['value'] = self._WrapperMessageToJsonObject(sub_message)
return js
if full_name in _WKTJSONMETHODS:
js['value'] = methodcaller(_WKTJSONMETHODS[full_name][0],
sub_message)(self)
return js
return self._RegularMessageToJsonObject(sub_message, js)
def _GenericMessageToJsonObject(self, message):
"""Converts message according to Proto3 JSON Specification."""
# Duration, Timestamp and FieldMask have ToJsonString method to do the
# convert. Users can also call the method directly.
return message.ToJsonString()
def _ValueMessageToJsonObject(self, message):
"""Converts Value message according to Proto3 JSON Specification."""
which = message.WhichOneof('kind')
    # If the Value message is not set, treat it as null_value when serializing
    # to JSON. The parsed-back result will then differ from the original message.
if which is None or which == 'null_value':
return None
if which == 'list_value':
return self._ListValueMessageToJsonObject(message.list_value)
if which == 'struct_value':
value = message.struct_value
else:
value = getattr(message, which)
oneof_descriptor = message.DESCRIPTOR.fields_by_name[which]
return self._FieldToJsonObject(oneof_descriptor, value)
def _ListValueMessageToJsonObject(self, message):
"""Converts ListValue message according to Proto3 JSON Specification."""
return [self._ValueMessageToJsonObject(value)
for value in message.values]
def _StructMessageToJsonObject(self, message):
"""Converts Struct message according to Proto3 JSON Specification."""
fields = message.fields
ret = {}
for key in fields:
ret[key] = self._ValueMessageToJsonObject(fields[key])
return ret
def _WrapperMessageToJsonObject(self, message):
return self._FieldToJsonObject(
message.DESCRIPTOR.fields_by_name['value'], message.value)
def _IsWrapperMessage(message_descriptor):
return message_descriptor.file.name == 'google/protobuf/wrappers.proto'
def _DuplicateChecker(js):
result = {}
for name, value in js:
if name in result:
raise ParseError('Failed to load JSON: duplicate key {0}.'.format(name))
result[name] = value
return result
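# Sketch of how this hook is wired up: json.loads() below passes each decoded
# object's (name, value) pairs to it, so duplicate keys surface as ParseError:
#   json.loads('{"a": 1, "a": 2}', object_pairs_hook=_DuplicateChecker)
#   -> ParseError: Failed to load JSON: duplicate key a.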
def _CreateMessageFromTypeUrl(type_url, descriptor_pool):
"""Creates a message from a type URL."""
db = symbol_database.Default()
pool = db.pool if descriptor_pool is None else descriptor_pool
type_name = type_url.split('/')[-1]
try:
message_descriptor = pool.FindMessageTypeByName(type_name)
except KeyError:
raise TypeError(
'Can not find message descriptor by type_url: {0}'.format(type_url))
message_class = db.GetPrototype(message_descriptor)
return message_class()
def Parse(text,
message,
ignore_unknown_fields=False,
descriptor_pool=None,
max_recursion_depth=100):
"""Parses a JSON representation of a protocol message into a message.
Args:
text: Message JSON representation.
message: A protocol buffer message to merge into.
ignore_unknown_fields: If True, do not raise errors for unknown fields.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
max_recursion_depth: max recursion depth of JSON message to be
deserialized. JSON messages over this depth will fail to be
deserialized. Default value is 100.
Returns:
The same message passed as argument.
  Raises:
ParseError: On JSON parsing problems.
"""
if not isinstance(text, str):
text = text.decode('utf-8')
try:
js = json.loads(text, object_pairs_hook=_DuplicateChecker)
except ValueError as e:
raise ParseError('Failed to load JSON: {0}.'.format(str(e)))
return ParseDict(js, message, ignore_unknown_fields, descriptor_pool,
max_recursion_depth)
def ParseDict(js_dict,
message,
ignore_unknown_fields=False,
descriptor_pool=None,
max_recursion_depth=100):
"""Parses a JSON dictionary representation into a message.
Args:
js_dict: Dict representation of a JSON message.
message: A protocol buffer message to merge into.
ignore_unknown_fields: If True, do not raise errors for unknown fields.
descriptor_pool: A Descriptor Pool for resolving types. If None use the
default.
max_recursion_depth: max recursion depth of JSON message to be
deserialized. JSON messages over this depth will fail to be
deserialized. Default value is 100.
Returns:
The same message passed as argument.
"""
parser = _Parser(ignore_unknown_fields, descriptor_pool, max_recursion_depth)
parser.ConvertMessage(js_dict, message, '')
return message
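# Round-trip sketch using the same placeholder names as the module docstring
# (my_proto_pb2.MyMessage with a string field "foo" is hypothetical):
#   message = my_proto_pb2.MyMessage(foo='bar')
#   d = MessageToDict(message)              # {'foo': 'bar'}
#   ParseDict(d, my_proto_pb2.MyMessage())  # reconstructs an equal message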
_INT_OR_FLOAT = (int, float)
class _Parser(object):
"""JSON format parser for protocol message."""
def __init__(self, ignore_unknown_fields, descriptor_pool,
max_recursion_depth):
self.ignore_unknown_fields = ignore_unknown_fields
self.descriptor_pool = descriptor_pool
self.max_recursion_depth = max_recursion_depth
self.recursion_depth = 0
def ConvertMessage(self, value, message, path):
"""Convert a JSON object into a message.
Args:
value: A JSON object.
message: A WKT or regular protocol message to record the data.
path: parent path to log parse error info.
Raises:
ParseError: In case of convert problems.
"""
self.recursion_depth += 1
if self.recursion_depth > self.max_recursion_depth:
raise ParseError('Message too deep. Max recursion depth is {0}'.format(
self.max_recursion_depth))
message_descriptor = message.DESCRIPTOR
full_name = message_descriptor.full_name
if not path:
path = message_descriptor.name
if _IsWrapperMessage(message_descriptor):
self._ConvertWrapperMessage(value, message, path)
elif full_name in _WKTJSONMETHODS:
methodcaller(_WKTJSONMETHODS[full_name][1], value, message, path)(self)
else:
self._ConvertFieldValuePair(value, message, path)
self.recursion_depth -= 1
def _ConvertFieldValuePair(self, js, message, path):
"""Convert field value pairs into regular message.
Args:
js: A JSON object to convert the field value pairs.
message: A regular protocol message to record the data.
path: parent path to log parse error info.
Raises:
ParseError: In case of problems converting.
"""
names = []
message_descriptor = message.DESCRIPTOR
fields_by_json_name = dict((f.json_name, f)
for f in message_descriptor.fields)
for name in js:
try:
field = fields_by_json_name.get(name, None)
if not field:
field = message_descriptor.fields_by_name.get(name, None)
if not field and _VALID_EXTENSION_NAME.match(name):
if not message_descriptor.is_extendable:
raise ParseError(
'Message type {0} does not have extensions at {1}'.format(
message_descriptor.full_name, path))
identifier = name[1:-1] # strip [] brackets
# pylint: disable=protected-access
field = message.Extensions._FindExtensionByName(identifier)
# pylint: enable=protected-access
if not field:
# Try looking for extension by the message type name, dropping the
# field name following the final . separator in full_name.
identifier = '.'.join(identifier.split('.')[:-1])
# pylint: disable=protected-access
field = message.Extensions._FindExtensionByName(identifier)
# pylint: enable=protected-access
if not field:
if self.ignore_unknown_fields:
continue
raise ParseError(
('Message type "{0}" has no field named "{1}" at "{2}".\n'
' Available Fields(except extensions): "{3}"').format(
message_descriptor.full_name, name, path,
[f.json_name for f in message_descriptor.fields]))
if name in names:
raise ParseError('Message type "{0}" should not have multiple '
'"{1}" fields at "{2}".'.format(
message.DESCRIPTOR.full_name, name, path))
names.append(name)
value = js[name]
# Check no other oneof field is parsed.
if field.containing_oneof is not None and value is not None:
oneof_name = field.containing_oneof.name
if oneof_name in names:
raise ParseError('Message type "{0}" should not have multiple '
'"{1}" oneof fields at "{2}".'.format(
message.DESCRIPTOR.full_name, oneof_name,
path))
names.append(oneof_name)
if value is None:
if (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE
and field.message_type.full_name == 'google.protobuf.Value'):
sub_message = getattr(message, field.name)
sub_message.null_value = 0
elif (field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM
and field.enum_type.full_name == 'google.protobuf.NullValue'):
setattr(message, field.name, 0)
else:
message.ClearField(field.name)
continue
# Parse field value.
if _IsMapEntry(field):
message.ClearField(field.name)
self._ConvertMapFieldValue(value, message, field,
'{0}.{1}'.format(path, name))
elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED:
message.ClearField(field.name)
if not isinstance(value, list):
          raise ParseError('Repeated field {0} must be a JSON list, '
                           'but got {1} at {2}.'.format(name, value, path))
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
# Repeated message field.
for index, item in enumerate(value):
sub_message = getattr(message, field.name).add()
# None is a null_value in Value.
if (item is None and
sub_message.DESCRIPTOR.full_name != 'google.protobuf.Value'):
raise ParseError('null is not allowed to be used as an element'
' in a repeated field at {0}.{1}[{2}]'.format(
path, name, index))
self.ConvertMessage(item, sub_message,
'{0}.{1}[{2}]'.format(path, name, index))
else:
# Repeated scalar field.
for index, item in enumerate(value):
if item is None:
raise ParseError('null is not allowed to be used as an element'
' in a repeated field at {0}.{1}[{2}]'.format(
path, name, index))
getattr(message, field.name).append(
_ConvertScalarFieldValue(
item, field, '{0}.{1}[{2}]'.format(path, name, index)))
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
if field.is_extension:
sub_message = message.Extensions[field]
else:
sub_message = getattr(message, field.name)
sub_message.SetInParent()
self.ConvertMessage(value, sub_message, '{0}.{1}'.format(path, name))
else:
if field.is_extension:
message.Extensions[field] = _ConvertScalarFieldValue(
value, field, '{0}.{1}'.format(path, name))
else:
setattr(
message, field.name,
_ConvertScalarFieldValue(value, field,
'{0}.{1}'.format(path, name)))
except ParseError as e:
if field and field.containing_oneof is None:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
else:
raise ParseError(str(e))
except ValueError as e:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
except TypeError as e:
raise ParseError('Failed to parse {0} field: {1}.'.format(name, e))
def _ConvertAnyMessage(self, value, message, path):
"""Convert a JSON representation into Any message."""
if isinstance(value, dict) and not value:
return
try:
type_url = value['@type']
except KeyError:
raise ParseError(
          '@type is missing when parsing an Any message at {0}'.format(path))
try:
sub_message = _CreateMessageFromTypeUrl(type_url, self.descriptor_pool)
except TypeError as e:
raise ParseError('{0} at {1}'.format(e, path))
message_descriptor = sub_message.DESCRIPTOR
full_name = message_descriptor.full_name
if _IsWrapperMessage(message_descriptor):
self._ConvertWrapperMessage(value['value'], sub_message,
'{0}.value'.format(path))
elif full_name in _WKTJSONMETHODS:
methodcaller(_WKTJSONMETHODS[full_name][1], value['value'], sub_message,
'{0}.value'.format(path))(
self)
else:
del value['@type']
self._ConvertFieldValuePair(value, sub_message, path)
value['@type'] = type_url
# Sets Any message
message.value = sub_message.SerializeToString()
message.type_url = type_url
def _ConvertGenericMessage(self, value, message, path):
"""Convert a JSON representation into message with FromJsonString."""
# Duration, Timestamp, FieldMask have a FromJsonString method to do the
# conversion. Users can also call the method directly.
try:
message.FromJsonString(value)
except ValueError as e:
raise ParseError('{0} at {1}'.format(e, path))
def _ConvertValueMessage(self, value, message, path):
"""Convert a JSON representation into Value message."""
if isinstance(value, dict):
self._ConvertStructMessage(value, message.struct_value, path)
elif isinstance(value, list):
self._ConvertListValueMessage(value, message.list_value, path)
elif value is None:
message.null_value = 0
elif isinstance(value, bool):
message.bool_value = value
elif isinstance(value, str):
message.string_value = value
elif isinstance(value, _INT_OR_FLOAT):
message.number_value = value
else:
raise ParseError('Value {0} has unexpected type {1} at {2}'.format(
value, type(value), path))
def _ConvertListValueMessage(self, value, message, path):
"""Convert a JSON representation into ListValue message."""
if not isinstance(value, list):
      raise ParseError('ListValue must be a JSON list, '
                       'but got {0} at {1}.'.format(value, path))
message.ClearField('values')
for index, item in enumerate(value):
self._ConvertValueMessage(item, message.values.add(),
'{0}[{1}]'.format(path, index))
def _ConvertStructMessage(self, value, message, path):
"""Convert a JSON representation into Struct message."""
if not isinstance(value, dict):
      raise ParseError('Struct must be a JSON object (dict), '
                       'but got {0} at {1}.'.format(value, path))
# Clear will mark the struct as modified so it will be created even if
# there are no values.
message.Clear()
for key in value:
self._ConvertValueMessage(value[key], message.fields[key],
'{0}.{1}'.format(path, key))
return
def _ConvertWrapperMessage(self, value, message, path):
"""Convert a JSON representation into Wrapper message."""
field = message.DESCRIPTOR.fields_by_name['value']
setattr(
message, 'value',
_ConvertScalarFieldValue(value, field, path='{0}.value'.format(path)))
def _ConvertMapFieldValue(self, value, message, field, path):
"""Convert map field value for a message map field.
Args:
value: A JSON object to convert the map field value.
message: A protocol message to record the converted data.
field: The descriptor of the map field to be converted.
path: parent path to log parse error info.
Raises:
ParseError: In case of convert problems.
"""
if not isinstance(value, dict):
      raise ParseError(
          'Map field {0} must be a JSON object (dict), '
          'but got {1} at {2}.'.format(field.name, value, path))
key_field = field.message_type.fields_by_name['key']
value_field = field.message_type.fields_by_name['value']
for key in value:
key_value = _ConvertScalarFieldValue(key, key_field,
'{0}.key'.format(path), True)
if value_field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_MESSAGE:
self.ConvertMessage(value[key],
getattr(message, field.name)[key_value],
'{0}[{1}]'.format(path, key_value))
else:
getattr(message, field.name)[key_value] = _ConvertScalarFieldValue(
value[key], value_field, path='{0}[{1}]'.format(path, key_value))
def _ConvertScalarFieldValue(value, field, path, require_str=False):
"""Convert a single scalar field value.
Args:
value: A scalar value to convert the scalar field value.
field: The descriptor of the field to convert.
path: parent path to log parse error info.
require_str: If True, the field value must be a str.
Returns:
The converted scalar field value
Raises:
ParseError: In case of convert problems.
"""
try:
if field.cpp_type in _INT_TYPES:
return _ConvertInteger(value)
elif field.cpp_type in _FLOAT_TYPES:
return _ConvertFloat(value, field)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_BOOL:
return _ConvertBool(value, require_str)
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_STRING:
if field.type == descriptor.FieldDescriptor.TYPE_BYTES:
if isinstance(value, str):
encoded = value.encode('utf-8')
else:
encoded = value
# Add extra padding '='
padded_value = encoded + b'=' * (4 - len(encoded) % 4)
return base64.urlsafe_b64decode(padded_value)
else:
# Checking for unpaired surrogates appears to be unreliable,
# depending on the specific Python version, so we check manually.
if _UNPAIRED_SURROGATE_PATTERN.search(value):
raise ParseError('Unpaired surrogate')
return value
elif field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_ENUM:
# Convert an enum value.
enum_value = field.enum_type.values_by_name.get(value, None)
if enum_value is None:
try:
number = int(value)
enum_value = field.enum_type.values_by_number.get(number, None)
except ValueError:
raise ParseError('Invalid enum value {0} for enum type {1}'.format(
value, field.enum_type.full_name))
if enum_value is None:
if field.file.syntax == 'proto3':
# Proto3 accepts unknown enums.
return number
raise ParseError('Invalid enum value {0} for enum type {1}'.format(
value, field.enum_type.full_name))
return enum_value.number
except ParseError as e:
raise ParseError('{0} at {1}'.format(e, path))
def _ConvertInteger(value):
"""Convert an integer.
Args:
value: A scalar value to convert.
Returns:
The integer value.
Raises:
ParseError: If an integer couldn't be consumed.
"""
if isinstance(value, float) and not value.is_integer():
raise ParseError('Couldn\'t parse integer: {0}'.format(value))
if isinstance(value, str) and value.find(' ') != -1:
raise ParseError('Couldn\'t parse integer: "{0}"'.format(value))
if isinstance(value, bool):
raise ParseError('Bool value {0} is not acceptable for '
'integer field'.format(value))
return int(value)
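# A few cases the checks above accept or reject:
#   _ConvertInteger(3.0)   ->  3           (whole-valued floats are fine)
#   _ConvertInteger('42')  ->  42
#   _ConvertInteger(3.5)   ->  ParseError  (fractional part)
#   _ConvertInteger(True)  ->  ParseError  (bools are rejected explicitly)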
def _ConvertFloat(value, field):
"""Convert an floating point number."""
if isinstance(value, float):
if math.isnan(value):
raise ParseError('Couldn\'t parse NaN, use quoted "NaN" instead')
if math.isinf(value):
if value > 0:
raise ParseError('Couldn\'t parse Infinity or value too large, '
'use quoted "Infinity" instead')
else:
raise ParseError('Couldn\'t parse -Infinity or value too small, '
'use quoted "-Infinity" instead')
if field.cpp_type == descriptor.FieldDescriptor.CPPTYPE_FLOAT:
# pylint: disable=protected-access
if value > type_checkers._FLOAT_MAX:
raise ParseError('Float value too large')
# pylint: disable=protected-access
if value < type_checkers._FLOAT_MIN:
raise ParseError('Float value too small')
if value == 'nan':
raise ParseError('Couldn\'t parse float "nan", use "NaN" instead')
try:
# Assume Python compatible syntax.
return float(value)
except ValueError:
# Check alternative spellings.
if value == _NEG_INFINITY:
return float('-inf')
elif value == _INFINITY:
return float('inf')
elif value == _NAN:
return float('nan')
else:
raise ParseError('Couldn\'t parse float: {0}'.format(value))
def _ConvertBool(value, require_str):
"""Convert a boolean value.
Args:
value: A scalar value to convert.
require_str: If True, value must be a str.
Returns:
The bool parsed.
Raises:
ParseError: If a boolean value couldn't be consumed.
"""
if require_str:
if value == 'true':
return True
elif value == 'false':
return False
else:
raise ParseError('Expected "true" or "false", not {0}'.format(value))
if not isinstance(value, bool):
raise ParseError('Expected true or false without quotes')
return value
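# require_str is True when the bool arrives as a JSON map key, which must be
# a quoted string:
#   _ConvertBool(True, require_str=False)    ->  True
#   _ConvertBool('true', require_str=True)   ->  True
#   _ConvertBool('true', require_str=False)  ->  ParseError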
_WKTJSONMETHODS = {
'google.protobuf.Any': ['_AnyMessageToJsonObject',
'_ConvertAnyMessage'],
'google.protobuf.Duration': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.FieldMask': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.ListValue': ['_ListValueMessageToJsonObject',
'_ConvertListValueMessage'],
'google.protobuf.Struct': ['_StructMessageToJsonObject',
'_ConvertStructMessage'],
'google.protobuf.Timestamp': ['_GenericMessageToJsonObject',
'_ConvertGenericMessage'],
'google.protobuf.Value': ['_ValueMessageToJsonObject',
'_ConvertValueMessage']
}

View file

@ -0,0 +1,424 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# TODO(robinson): We should just make these methods all "pure-virtual" and move
# all implementation out, into reflection.py for now.
"""Contains an abstract base class for protocol messages."""
__author__ = 'robinson@google.com (Will Robinson)'
class Error(Exception):
"""Base error type for this module."""
pass
class DecodeError(Error):
"""Exception raised when deserializing messages."""
pass
class EncodeError(Error):
"""Exception raised when serializing messages."""
pass
class Message(object):
"""Abstract base class for protocol messages.
Protocol message classes are almost always generated by the protocol
compiler. These generated types subclass Message and implement the methods
shown below.
"""
# TODO(robinson): Link to an HTML document here.
# TODO(robinson): Document that instances of this class will also
# have an Extensions attribute with __getitem__ and __setitem__.
# Again, not sure how to best convey this.
# TODO(robinson): Document that the class must also have a static
# RegisterExtension(extension_field) method.
# Not sure how to best express at this point.
# TODO(robinson): Document these fields and methods.
__slots__ = []
#: The :class:`google.protobuf.descriptor.Descriptor` for this message type.
DESCRIPTOR = None
def __deepcopy__(self, memo=None):
clone = type(self)()
clone.MergeFrom(self)
return clone
def __eq__(self, other_msg):
"""Recursively compares two messages by value and structure."""
raise NotImplementedError
def __ne__(self, other_msg):
# Can't just say self != other_msg, since that would infinitely recurse. :)
return not self == other_msg
def __hash__(self):
raise TypeError('unhashable object')
def __str__(self):
"""Outputs a human-readable representation of the message."""
raise NotImplementedError
def __unicode__(self):
"""Outputs a human-readable representation of the message."""
raise NotImplementedError
def MergeFrom(self, other_msg):
"""Merges the contents of the specified message into current message.
This method merges the contents of the specified message into the current
message. Singular fields that are set in the specified message overwrite
the corresponding fields in the current message. Repeated fields are
appended. Singular sub-messages and groups are recursively merged.
Args:
other_msg (Message): A message to merge into the current message.
"""
raise NotImplementedError
def CopyFrom(self, other_msg):
"""Copies the content of the specified message into the current message.
The method clears the current message and then merges the specified
message using MergeFrom.
Args:
other_msg (Message): A message to copy into the current one.
"""
if self is other_msg:
return
self.Clear()
self.MergeFrom(other_msg)
def Clear(self):
"""Clears all data that was set in the message."""
raise NotImplementedError
def SetInParent(self):
"""Mark this as present in the parent.
This normally happens automatically when you assign a field of a
sub-message, but sometimes you want to make the sub-message
present while keeping it empty. If you find yourself using this,
you may want to reconsider your design.
"""
raise NotImplementedError
def IsInitialized(self):
"""Checks if the message is initialized.
Returns:
bool: The method returns True if the message is initialized (i.e. all of
its required fields are set).
"""
raise NotImplementedError
# TODO(robinson): MergeFromString() should probably return None and be
# implemented in terms of a helper that returns the # of bytes read. Our
# deserialization routines would use the helper when recursively
# deserializing, but the end user would almost always just want the no-return
# MergeFromString().
def MergeFromString(self, serialized):
"""Merges serialized protocol buffer data into this message.
When we find a field in `serialized` that is already present
in this message:
- If it's a "repeated" field, we append to the end of our list.
- Else, if it's a scalar, we overwrite our field.
- Else, (it's a nonrepeated composite), we recursively merge
into the existing composite.
Args:
serialized (bytes): Any object that allows us to call
``memoryview(serialized)`` to access a string of bytes using the
buffer interface.
Returns:
int: The number of bytes read from `serialized`.
For non-group messages, this will always be `len(serialized)`,
but for messages which are actually groups, this will
generally be less than `len(serialized)`, since we must
stop when we reach an ``END_GROUP`` tag. Note that if
we *do* stop because of an ``END_GROUP`` tag, the number
of bytes returned does not include the bytes
for the ``END_GROUP`` tag information.
Raises:
DecodeError: if the input cannot be parsed.
"""
# TODO(robinson): Document handling of unknown fields.
# TODO(robinson): When we switch to a helper, this will return None.
raise NotImplementedError
def ParseFromString(self, serialized):
"""Parse serialized protocol buffer data into this message.
Like :func:`MergeFromString()`, except we clear the object first.
Raises:
message.DecodeError if the input cannot be parsed.
"""
self.Clear()
return self.MergeFromString(serialized)
def SerializeToString(self, **kwargs):
"""Serializes the protocol message to a binary string.
Keyword Args:
deterministic (bool): If true, requests deterministic serialization
of the protobuf, with predictable ordering of map keys.
Returns:
A binary string representation of the message if all of the required
fields in the message are set (i.e. the message is initialized).
Raises:
EncodeError: if the message isn't initialized (see :func:`IsInitialized`).
"""
raise NotImplementedError
def SerializePartialToString(self, **kwargs):
"""Serializes the protocol message to a binary string.
This method is similar to SerializeToString but doesn't check if the
message is initialized.
Keyword Args:
deterministic (bool): If true, requests deterministic serialization
of the protobuf, with predictable ordering of map keys.
Returns:
bytes: A serialized representation of the partial message.
"""
raise NotImplementedError
# TODO(robinson): Decide whether we like these better
# than auto-generated has_foo() and clear_foo() methods
# on the instances themselves. This way is less consistent
# with C++, but it makes reflection-type access easier and
# reduces the number of magically autogenerated things.
#
# TODO(robinson): Be sure to document (and test) exactly
# which field names are accepted here. Are we case-sensitive?
# What do we do with fields that share names with Python keywords
# like 'lambda' and 'yield'?
#
# nnorwitz says:
# """
# Typically (in python), an underscore is appended to names that are
# keywords. So they would become lambda_ or yield_.
# """
def ListFields(self):
"""Returns a list of (FieldDescriptor, value) tuples for present fields.
A message field is non-empty if HasField() would return true. A singular
primitive field is non-empty if HasField() would return true in proto2 or it
is non zero in proto3. A repeated field is non-empty if it contains at least
one element. The fields are ordered by field number.
Returns:
list[tuple(FieldDescriptor, value)]: field descriptors and values
for all fields in the message which are not empty. The values vary by
field type.
"""
raise NotImplementedError
def HasField(self, field_name):
"""Checks if a certain field is set for the message.
For a oneof group, checks if any field inside is set. Note that if the
field_name is not defined in the message descriptor, :exc:`ValueError` will
be raised.
Args:
field_name (str): The name of the field to check for presence.
Returns:
bool: Whether a value has been set for the named field.
Raises:
ValueError: if the `field_name` is not a member of this message.
"""
raise NotImplementedError
def ClearField(self, field_name):
"""Clears the contents of a given field.
Inside a oneof group, clears the field set. If the name neither refers to a
defined field or oneof group, :exc:`ValueError` is raised.
Args:
field_name (str): The name of the field to check for presence.
Raises:
ValueError: if the `field_name` is not a member of this message.
"""
raise NotImplementedError
def WhichOneof(self, oneof_group):
"""Returns the name of the field that is set inside a oneof group.
If no field is set, returns None.
Args:
oneof_group (str): the name of the oneof group to check.
Returns:
str or None: The name of the group that is set, or None.
Raises:
ValueError: no group with the given name exists
"""
raise NotImplementedError
def HasExtension(self, extension_handle):
"""Checks if a certain extension is present for this message.
Extensions are retrieved using the :attr:`Extensions` mapping (if present).
Args:
extension_handle: The handle for the extension to check.
Returns:
bool: Whether the extension is present for this message.
Raises:
KeyError: if the extension is repeated. Similar to repeated fields,
there is no separate notion of presence: a "not present" repeated
extension is an empty list.
"""
raise NotImplementedError
def ClearExtension(self, extension_handle):
"""Clears the contents of a given extension.
Args:
extension_handle: The handle for the extension to clear.
"""
raise NotImplementedError
def UnknownFields(self):
"""Returns the UnknownFieldSet.
Returns:
UnknownFieldSet: The unknown fields stored in this message.
"""
raise NotImplementedError
def DiscardUnknownFields(self):
"""Clears all fields in the :class:`UnknownFieldSet`.
This operation is recursive for nested message.
"""
raise NotImplementedError
def ByteSize(self):
"""Returns the serialized size of this message.
Recursively calls ByteSize() on all contained messages.
Returns:
int: The number of bytes required to serialize this message.
"""
raise NotImplementedError
@classmethod
def FromString(cls, s):
raise NotImplementedError
@staticmethod
def RegisterExtension(extension_handle):
raise NotImplementedError
def _SetListener(self, message_listener):
"""Internal method used by the protocol message implementation.
Clients should not call this directly.
Sets a listener that this message will call on certain state transitions.
The purpose of this method is to register back-edges from children to
parents at runtime, for the purpose of setting "has" bits and
byte-size-dirty bits in the parent and ancestor objects whenever a child or
descendant object is modified.
    If the client wants to disconnect this Message from the object tree, they
    explicitly set the callback to None.
If message_listener is None, unregisters any existing listener. Otherwise,
message_listener must implement the MessageListener interface in
internal/message_listener.py, and we discard any listener registered
via a previous _SetListener() call.
"""
raise NotImplementedError
def __getstate__(self):
"""Support the pickle protocol."""
return dict(serialized=self.SerializePartialToString())
def __setstate__(self, state):
"""Support the pickle protocol."""
self.__init__()
serialized = state['serialized']
# On Python 3, using encoding='latin1' is required for unpickling
# protos pickled by Python 2.
if not isinstance(serialized, bytes):
serialized = serialized.encode('latin1')
self.ParseFromString(serialized)
def __reduce__(self):
message_descriptor = self.DESCRIPTOR
if message_descriptor.containing_type is None:
return type(self), (), self.__getstate__()
# the message type must be nested.
# Python does not pickle nested classes; use the symbol_database on the
# receiving end.
container = message_descriptor
return (_InternalConstructMessage, (container.full_name,),
self.__getstate__())
def _InternalConstructMessage(full_name):
"""Constructs a nested message."""
from google.protobuf import symbol_database # pylint:disable=g-import-not-at-top
return symbol_database.Default().GetSymbol(full_name)()
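# Pickle round-trip sketch: __getstate__/__reduce__ above serialize through
# the wire format, so any generated message instance pickles (msg is a
# placeholder for such an instance):
#   import pickle
#   restored = pickle.loads(pickle.dumps(msg))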

View file

@ -0,0 +1,185 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides a factory class for generating dynamic messages.
If you have access to the FileDescriptor protos containing the messages you
want to create, the easiest way to use this class is to do the following:
message_classes = message_factory.GetMessages(iterable_of_file_descriptors)
my_proto_instance = message_classes['some.proto.package.MessageName']()
"""
__author__ = 'matthewtoia@google.com (Matt Toia)'
from google.protobuf.internal import api_implementation
from google.protobuf import descriptor_pool
from google.protobuf import message
if api_implementation.Type() == 'cpp':
from google.protobuf.pyext import cpp_message as message_impl
else:
from google.protobuf.internal import python_message as message_impl
# The type of all Message classes.
_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType
class MessageFactory(object):
"""Factory for creating Proto2 messages from descriptors in a pool."""
def __init__(self, pool=None):
"""Initializes a new factory."""
self.pool = pool or descriptor_pool.DescriptorPool()
# local cache of all classes built from protobuf descriptors
self._classes = {}
def GetPrototype(self, descriptor):
"""Obtains a proto2 message class based on the passed in descriptor.
Passing a descriptor with a fully qualified name matching a previous
invocation will cause the same class to be returned.
Args:
descriptor: The descriptor to build from.
Returns:
A class describing the passed in descriptor.
"""
if descriptor not in self._classes:
result_class = self.CreatePrototype(descriptor)
# The assignment to _classes is redundant for the base implementation, but
# might avoid confusion in cases where CreatePrototype gets overridden and
# does not call the base implementation.
self._classes[descriptor] = result_class
return result_class
return self._classes[descriptor]
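# Illustrative usage sketch (assumes a file defining 'example.Greeting' has
# already been added to this factory's descriptor pool; the names are
# hypothetical):
#
#   factory = MessageFactory()
#   desc = factory.pool.FindMessageTypeByName('example.Greeting')
#   Greeting = factory.GetPrototype(desc)
#   msg = Greeting()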
def CreatePrototype(self, descriptor):
"""Builds a proto2 message class based on the passed in descriptor.
Don't call this function directly; it always creates a new class. Call
GetPrototype() instead. This method is meant to be overridden in subclasses
to perform additional operations on the newly constructed class.
Args:
descriptor: The descriptor to build from.
Returns:
A class describing the passed in descriptor.
"""
descriptor_name = descriptor.name
result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE(
descriptor_name,
(message.Message,),
{
'DESCRIPTOR': descriptor,
# If __module__ is not set, it wrongly points to the message_factory module.
'__module__': None,
})
result_class._FACTORY = self # pylint: disable=protected-access
# Assign in _classes before doing recursive calls to avoid infinite
# recursion.
self._classes[descriptor] = result_class
for field in descriptor.fields:
if field.message_type:
self.GetPrototype(field.message_type)
for extension in result_class.DESCRIPTOR.extensions:
if extension.containing_type not in self._classes:
self.GetPrototype(extension.containing_type)
extended_class = self._classes[extension.containing_type]
extended_class.RegisterExtension(extension)
return result_class
def GetMessages(self, files):
"""Gets all the messages from a specified file.
This will find and resolve dependencies, failing if the descriptor
pool cannot satisfy them.
Args:
files: The file names to extract messages from.
Returns:
A dictionary mapping proto names to the message classes. This will include
any dependent messages as well as any messages defined in the same file as
a specified message.
"""
result = {}
for file_name in files:
file_desc = self.pool.FindFileByName(file_name)
for desc in file_desc.message_types_by_name.values():
result[desc.full_name] = self.GetPrototype(desc)
# While the extension FieldDescriptors are created by the descriptor pool,
# the python classes created in the factory need them to be registered
# explicitly, which is done below.
#
# The call to RegisterExtension will specifically check if the
# extension was already registered on the object and either
# ignore the registration if the original was the same, or raise
# an error if they were different.
for extension in file_desc.extensions_by_name.values():
if extension.containing_type not in self._classes:
self.GetPrototype(extension.containing_type)
extended_class = self._classes[extension.containing_type]
extended_class.RegisterExtension(extension)
return result
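# Illustrative usage sketch for the method above (assumes 'example.proto' is
# resolvable by name in this factory's pool; the names are hypothetical):
#
#   classes = factory.GetMessages(['example.proto'])
#   Greeting = classes['example.Greeting']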
_FACTORY = MessageFactory()
def GetMessages(file_protos):
"""Builds a dictionary of all the messages available in a set of files.
Args:
file_protos: Iterable of FileDescriptorProto to build messages out of.
Returns:
A dictionary mapping proto names to the message classes. This will include
any dependent messages as well as any messages defined in the same file as
a specified message.
"""
# The cpp implementation of the protocol buffer library requires messages to
# be added in topological order of the dependency graph.
file_by_name = {file_proto.name: file_proto for file_proto in file_protos}
def _AddFile(file_proto):
for dependency in file_proto.dependency:
if dependency in file_by_name:
# Remove from elements to be visited, in order to cut cycles.
_AddFile(file_by_name.pop(dependency))
_FACTORY.pool.Add(file_proto)
while file_by_name:
_AddFile(file_by_name.popitem()[1])
return _FACTORY.GetMessages([file_proto.name for file_proto in file_protos])
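# Illustrative sketch of the module-level helper above, building a
# FileDescriptorProto by hand (all names here are hypothetical):
#
#   from google.protobuf import descriptor_pb2
#   fdp = descriptor_pb2.FileDescriptorProto()
#   fdp.name = 'example.proto'
#   fdp.package = 'example'
#   msg_type = fdp.message_type.add()
#   msg_type.name = 'Greeting'
#   field = msg_type.field.add()
#   field.name = 'text'
#   field.number = 1
#   field.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL
#   field.type = descriptor_pb2.FieldDescriptorProto.TYPE_STRING
#   classes = GetMessages([fdp])
#   greeting = classes['example.Greeting'](text='hello')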

Some files were not shown because too many files have changed in this diff.