OP-2042 - added a bit of documentation

Petr Kalis 2021-12-03 17:33:50 +01:00
parent b61b3634b2
commit efa7bffe06


@@ -5,16 +5,23 @@ Contains end-to-end tests for automatic testing of OP.
Should run headless publishes on all hosts to check basic publish use cases automatically
and limit regression issues.
How to run
----------
- activate `{OPENPYPE_ROOT}/.venv`
- run in cmd:
`{OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests {OPENPYPE_ROOT}/tests/integration`
- add `hosts/APP_NAME` after the integration part to limit the run to a specific app (eg. `{OPENPYPE_ROOT}/tests/integration/hosts/maya`), as in the example below
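
Putting the two steps together, a run limited to Maya could look like this (Windows-style paths as used above; standard venv layout assumed):
```
{OPENPYPE_ROOT}/.venv/Scripts/activate
{OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests {OPENPYPE_ROOT}/tests/integration/hosts/maya
```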
How to create test for publishing from host
------------------------------------------
- Extend `PublishTest` in `tests/lib/testing_classes.py`
- Use the `resources\test_data.zip` skeleton file as a template for testing input data
- Put the workfile into `test_data.zip/input/workfile`
- If you require DB dumps other than the base ones, provide them in `test_data.zip/input/dumps`
-- (Check the commented code in `db_handler.py` for how to dump a specific DB. Currently all collections are dumped.)
- Implement `last_workfile_path`
- `startup_scripts` - must point the host to the startup script saved in `test_data.zip/input/startup`
-- Script must contain something like (pseudocode):
```
import openpype
from avalon import api, HOST
@@ -25,13 +32,18 @@ pyblish.util.publish()
EXIT_APP (command to exit host)
```
(Install and publish methods must be triggered only AFTER the host app is fully initialized!)
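
For illustration only, a hedged sketch of how such a startup script might look; the host module (`photoshop` here) and the exact install calls are assumptions and differ per host integration:
```
# Hypothetical sketch only - real startup scripts are host specific.
import pyblish.util
from avalon import api
from avalon import photoshop as host  # assumption: swap for the tested host module

# Install calls must run only AFTER the host app is fully initialized!
api.install(host)

# Publish the opened workfile headlessly.
pyblish.util.publish()

# Finally trigger the host specific exit command so the test run can finish.
```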
- If you would like to add any command line arguments for your host app, add them to `test_data.zip/input/app_args/app_args.json` (as a JSON list)
- Provide any required environment variables in `test_data.zip/input/env_vars/env_vars.json` (as a JSON dictionary)
- Zip `test_data.zip`, give it a descriptive name, upload it to Google Drive, right click - `Get link`, and copy the hash id (the file must be accessible to anyone with the link!)
- Put this hash id and the zip file name into `TEST_FILES` as `[(HASH_ID, FILE_NAME, MD5_OPTIONAL)]`. If you want to check the MD5 of the downloaded file, provide the md5 value of the zipped file.
- Implement any assert checks you need in the extended class (see the combined sketch after this list)
- Run the test class manually (via PyCharm or a pytest runner (TODO))
- If you want the test to visually compare expected files to the published ones, set PERSIST to True and run the test manually
-- Locate the temporary `publish` subfolder of the temporary folder (its path is printed in the debugging console log)
-- Copy the whole folder content into the `expected` subfolder of the .zip file
-- By default tests compare only the structure of `expected` and the published output (eg. if you want to save space, replace the published files with empty files, but keep the expected names!)
-- Zip and upload again, change PERSIST to False
- Use the `TEST_DATA_FOLDER` variable in your class to reuse already downloaded and unzipped test data (for faster creation of tests)
- Keep `APP_VARIANT` empty if you want to trigger the test on the latest version of the app, or provide an explicit value (eg. '2022' for Photoshop)
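
To tie the steps above together, here is a hedged sketch of what such an extended test class could look like. Only `PublishTest`, `tests/lib/testing_classes.py` and the listed attribute names come from this guide; the class name, hash id, zip name and the test method are placeholders:
```
# Hypothetical sketch - placeholder values, adjust to your host and test data.
from tests.lib.testing_classes import PublishTest


class TestMayaPublish(PublishTest):
    """Headless publish test for an example host (Maya)."""

    # (Google Drive hash id, zip file name, optional md5 of the zip)
    TEST_FILES = [
        ("GOOGLE_DRIVE_HASH_ID", "test_maya_publish.zip", ""),
    ]

    APP_VARIANT = ""          # empty -> latest installed version of the app
    PERSIST = False           # set to True while preparing `expected` data
    TEST_DATA_FOLDER = None   # path to already unzipped test data, to reuse it

    def test_publish(self):
        # Implement any assert checks you need on the published result here.
        ...
```
Run it manually, eg. from PyCharm or with the `runtests` command shown above.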