move documentation from openpype.io

This commit is contained in:
Milan Kolar 2021-04-06 20:47:07 +02:00
parent 674844eeaf
commit 0851cf0886
296 changed files with 19633 additions and 2 deletions

.github/workflows/documentation.yml

@ -0,0 +1,58 @@
name: documentation
on:
  pull_request:
    branches: [develop]
  push:
    branches: [main]
jobs:
  check-build:
    if: github.event_name != 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: Test Build
        run: |
          cd website
          if [ -e yarn.lock ]; then
            yarn install --frozen-lockfile
          elif [ -e package-lock.json ]; then
            npm ci
          else
            npm i
          fi
          npm run build
  deploy-website:
    if: github.event_name != 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - name: 🚚 Get latest code
        uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12.x'
      - name: 🔨 Build
        run: |
          cd website
          if [ -e yarn.lock ]; then
            yarn install --frozen-lockfile
          elif [ -e package-lock.json ]; then
            npm ci
          else
            npm i
          fi
          npm run build
      - name: 📂 Sync files
        uses: SamKirkland/FTP-Deploy-Action@4.0.0
        with:
          server: ftp.openpype.io
          username: ${{ secrets.ftp_user }}
          password: ${{ secrets.ftp_password }}
          local-dir: ./website/build/

.gitignore

@ -67,7 +67,7 @@ coverage.xml
# Node JS packages
##################
node_modules/
node_modules
package-lock.json
openpype/premiere/ppro/js/debug.log
@ -81,4 +81,15 @@ openpype/premiere/ppro/js/debug.log
.vscode/
.env
dump.sql
test_localsystem.txt
test_localsystem.txt
# website
##########
website/translated_docs
website/build/
website/node_modules
website/i18n/*
website/debug.log
website/.docusaurus


@ -0,0 +1,78 @@
---
id: admin_anatomy
title: Project Anatomy
sidebar_label: Folder Structure
---
## PROJECT Structure
This is an example project structure when using Pype:
```text
Project
├───assets
│ ├───Bob
│ └───...
└───episodes
└───ep01
└───sq01
└───ep01_sq01_sh001
├───publish
└───work
```
:::note Shot naming
We strongly recommend naming shots with their full hierarchical name. Avalon doesn't allow two assets with the same name in a project. Therefore, if you have for example:
```text
sequence01 / shot001
```
and then
```text
sequence02 / shot001
```
you'll run into trouble because there are now two assets called `shot001`.
A better way is to use a fully qualified name for the shot, so the above becomes:
```text
sequence01 / sequence01_shot001
```
This has two advantages: there will be no duplicates, and artists can see the whole hierarchy just by looking at the filename.
:::
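The naming convention can be sketched in a few lines of Python (a hypothetical helper, not part of Pype):

```python
# A hypothetical helper (not part of Pype) illustrating the convention:
def qualified_shot_name(sequence, shot):
    """Prefix the shot with its sequence so the name is unique project-wide."""
    return "{}_{}".format(sequence, shot)

# Two "shot001" entries no longer collide once qualified:
names = {qualified_shot_name(sq, "shot001") for sq in ("sequence01", "sequence02")}
```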
## ASSET Structure
```text
Bob
├───publish
│ ├───model
│ │ ├───modelMain
│ │ ├───modelProxy
│ │ └───modelSculpt
│ ├───workfile
│ │ └───taskName
│ ├───rig
│ │ └───rigMain
│ ├───look
│ │ ├───lookMain
│ │ │ └───v01
│ │ │ └───texture
│ │ └───lookWet
│ ├───camera
│ │ ├───camMain
│ │ └───camLayout
│ ├───cache
│ │ ├───cacheChar01
│ │ └───cacheRock01
│ ├───vrproxy
│ ├───fx
│ └───setdress
└───work
├───concept
├───fur
├───modelling
├───rig
├───look
└───taskName
```


@ -0,0 +1,392 @@
---
id: admin_config
title: Studio Config
sidebar_label: Studio Config
---
All of the studio specific configurations are stored as simple JSON files in the **pype-config** repository.
Config is split into multiple sections described below.
## Anatomy
Defines where and how folders and files are created for all the project data. Anatomy has two parts **Roots** and **Templates**.
:::warning
It is recommended to create anatomy [overrides](#per-project-configuration) for each project even if the values haven't changed. Ignoring this recommendation may have catastrophic consequences.
:::
### Roots
Roots define where files are stored, as paths to the shared folder. You can set them in `roots.json`.
A root path must be set for each platform you use in the studio, and all paths must point to the same folder!
```json
{
  "windows": "P:/projects",
  "darwin": "/Volumes/projects",
  "linux": "/mnt/share/projects"
}
```
It is possible to set multiple roots when necessary. This can be handy when you need to store a specific type of data on another disk. In that case you'll have to add one level to the JSON.
```json
{
  "work": {
    "windows": "P:/work",
    "darwin": "/Volumes/work",
    "linux": "/mnt/share/work"
  },
  "publish": {
    "windows": "Y:/publish",
    "darwin": "/Volumes/publish",
    "linux": "/mnt/share/publish"
  }
}
```
Usage of multiple roots is explained below in the Templates section.
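Selecting the right root for the running platform boils down to a simple lookup. A minimal sketch (the helper is hypothetical, not Pype's API):

```python
import sys

# Hypothetical helper (not Pype's API) picking the right root for the running OS.
ROOTS = {
    "windows": "P:/projects",
    "darwin": "/Volumes/projects",
    "linux": "/mnt/share/projects",
}

def current_root(roots):
    # sys.platform is "win32" on Windows and "darwin" on macOS
    key = {"win32": "windows", "darwin": "darwin"}.get(sys.platform, "linux")
    return roots[key]
```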
### Templates
Templates define project's folder structure and filenames. You can set them in `default.yaml`.
### Required templates
We have a few required anatomy templates for Pype to work properly, and we keep adding more as needed.
```yaml
work:
  folder: "{root}/{project[name]}/{hierarchy}/{asset}/work/{task}"
  file: "{project[code]}_{asset}_{task}_v{version:0>3}<_{comment}>.{ext}"
  path: "{root}/{project[name]}/{hierarchy}/{asset}/work/{task}/{project[code]}_{asset}_{task}_v{version:0>3}<_{comment}>.{ext}"
publish:
  folder: "{root}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/v{version:0>3}"
  file: "{project[code]}_{asset}_{subset}_v{version:0>3}<.{frame}>.{representation}"
  path: "{root}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/v{version:0>3}/{project[code]}_{asset}_{subset}_v{version:0>3}<.{frame}>.{representation}"
```
The template groups `work` and `publish` must be set in all circumstances. Both must define the keys shown: `folder` holds the path template for the directory where the files are stored, `file` holds only the filename, and `path` combines the two for quicker access.
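The templates behave like Python format strings (the optional `<...>` parts aside). A minimal sketch with assumed example context values:

```python
# Filling a simplified `work` file template (optional <_{comment}> part omitted);
# all context values below are made-up examples.
template = "{project[code]}_{asset}_{task}_v{version:0>3}.{ext}"
context = {
    "project": {"code": "prj"},
    "asset": "ep01_sq01_sh001",
    "task": "animation",
    "version": 1,
    "ext": "ma",
}
filename = template.format(**context)
# -> "prj_ep01_sq01_sh001_animation_v001.ma"
```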
### Available keys
| Context key | Description |
| --- | --- |
| root | Path to root folder |
| root[\<root name\>] | Path to root folder when multiple roots are used.<br />Key `<root name>` represents root key specified in `roots.json` |
| project[name] | Project's full name. |
| project[code] | Project's code. |
| hierarchy | All hierarchical parents as subfolders. |
| asset | Name of asset or shot. |
| task | Name of task. |
| version | Version number. |
| subset | Subset name. |
| family | Main family name. |
| ext | File extension. (Possible to use only in the `work` template atm.) |
| representation | Representation name. (Used instead of `ext` in all templates except `work` atm.) |
| frame | Frame number for sequence files. |
| output | |
| comment | |
:::warning
Be careful about using `root` key in templates when using multiple roots. It is not allowed to combine both `{root}` and `{root[<root name>]}` in templates.
:::
:::note
It is recommended to set padding for `version`, which is possible with an additional expression in the template. The key `{version:0>3}` will result in `001` if version `1` is published.
**Explanation:** The expression `0>3` adds the `"0"` character to the beginning (`>`) until the string has `3` characters.
:::
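The padding expression is standard Python format-spec syntax, so it is easy to check (nothing Pype-specific assumed here):

```python
# `0>3` = fill with "0", align right (`>`), minimum width 3.
padded = "{version:0>3}".format(version=1)
wide = "{version:0>3}".format(version=1024)  # wider values pass through unchanged
```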
| Date-Time key | Example result | Description |
| --- | --- | --- |
| d | 1, 30 | Day of month in shortest possible way. |
| dd | 01, 30 | Day of month with 2 digits. |
| ddd | Mon | Shortened week day name. |
| dddd | Monday | Full week day name. |
| m | 1, 12 | Month number in shortest possible way. |
| mm | 01, 12 | Month number with 2 digits. |
| mmm | Jan | Shortened month name. |
| mmmm | January | Full month name. |
| yy | 20 | Shortened year. |
| yyyy | 2020 | Full year. |
| H | 4, 17 | Shortened 24-hour number. |
| HH | 04, 17 | 24-hour number with 2 digits. |
| h | 5 | Shortened 12-hour number. |
| hh | 05 | 12-hour number with 2 digits. |
| ht | AM, PM | Midday part. |
| M | 0 | Shortened minutes number. |
| MM | 00 | Minutes number with 2 digits. |
| S | 0 | Shortened seconds number. |
| SS | 00 | Seconds number with 2 digits. |
### Optional keys
Keys become optional when wrapped with `<` and `>`. It is recommended to use this only for the following keys, for obvious reasons:
- `output`, `comment` are optional to fill
- `frame` is used only for sequences.
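One way to picture optional keys: if the key is missing from the context, the whole `<...>` chunk is dropped. A minimal sketch (not Pype's implementation):

```python
import re

# Minimal sketch (not Pype's implementation): drop a <...> chunk entirely
# when its key is missing from the context, otherwise format it normally.
def resolve_optional(template, context):
    def _sub(match):
        inner = match.group(1)
        try:
            return inner.format(**context)
        except (KeyError, IndexError):
            return ""  # key missing: the whole optional chunk disappears
    return re.sub(r"<([^>]+)>", _sub, template).format(**context)

resolve_optional("{asset}_v{version:0>3}<_{comment}>.{ext}",
                 {"asset": "Bob", "version": 2, "ext": "ma"})
# -> "Bob_v002.ma"; with "comment" in the context -> "Bob_v002_final.ma"
```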
### Inner keys
It is possible to use the value of one template key inside the value of another template key. This works only within a template group, which means it is not possible to use a template key from the `publish` group inside the `work` group.
Usage is similar to regular template keys, but instead of `{key}` you must add `@` in front of the key: `{@key}`.
With this feature, the `work` template from the example above becomes much easier to read and modify.
```yaml
work:
  folder: "{root}/{project[name]}/{hierarchy}/{asset}/work/{task}"
  file: "{project[code]}_{asset}_{task}_v{version:0>3}<_{comment}>.{ext}"
  path: "{@folder}/{@file}"
  # This is how the `path` key will look as a result:
  # path: "{root}/{project[name]}/{hierarchy}/{asset}/work/{task}/{project[code]}_{asset}_{task}_v{version:0>3}<_{comment}>.{ext}"
```
:::warning
Be aware of unsolvable recursion in inner keys.
```yaml
group:
  # Keys referencing each other
  key_1: "{@key_2}"
  key_2: "{@key_1}"
  # Key referencing itself
  key_3: "{@key_3}"
```
:::
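Inner-key resolution with a guard against the unsolvable recursion above can be sketched like this (a minimal sketch, not Pype's implementation):

```python
# Minimal sketch (not Pype's implementation) of inner-key resolution that
# raises instead of looping forever on recursive references.
def resolve_inner(group, key, _seen=None):
    _seen = set() if _seen is None else _seen
    if key in _seen:
        raise ValueError("recursive inner key: {}".format(key))
    _seen.add(key)
    value = group[key]
    # replace every {@other} token with the resolved value of `other`
    for other in group:
        token = "{@%s}" % other
        if token in value:
            value = value.replace(token, resolve_inner(group, other, set(_seen)))
    return value

work = {
    "folder": "{root}/{asset}/work/{task}",
    "file": "{asset}_{task}_v{version:0>3}.{ext}",
    "path": "{@folder}/{@file}",
}
resolve_inner(work, "path")
# -> "{root}/{asset}/work/{task}/{asset}_{task}_v{version:0>3}.{ext}"
```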
### Global keys
Global keys are keys whose value is set outside any template group. All of them are available in each template group, and each group can override them.
**Source**
```yaml
# Global key outside template group
global_key: "global value"
group_1:
  # `global_key` is not set
  example_key_1: "{example_value_1}"
group_2:
  # `global_key` is overridden
  global_key: "overridden global value"
```
**Result**
```yaml
global_key: "global value"
group_1:
  # `global_key` was added
  global_key: "global value"
  example_key_1: "{example_value_1}"
group_2:
  # `global_key` kept its value for `group_2`
  global_key: "overridden global value"
```
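The global-key behaviour boils down to a dict merge; a minimal sketch (helper name is made up):

```python
# Minimal sketch (hypothetical helper): push globals into every group,
# letting the group's own values win over the globals.
def apply_globals(anatomy):
    groups = {k: v for k, v in anatomy.items() if isinstance(v, dict)}
    global_keys = {k: v for k, v in anatomy.items() if not isinstance(v, dict)}
    return {name: dict(global_keys, **group) for name, group in groups.items()}

anatomy = {
    "global_key": "global value",
    "group_1": {"example_key_1": "{example_value_1}"},
    "group_2": {"global_key": "overridden global value"},
}
result = apply_globals(anatomy)
# group_1 gains global_key; group_2 keeps its override
```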
### Combine Inner keys with Global keys
The real power of [Inner](#inner-keys) and [Global](#global-keys) keys lies in their combination.
**Template source**
```yaml
# PADDING
frame_padding: 4
frame: "{frame:0>{@frame_padding}}"
# MULTIPLE ROOT
root_name: "root_name_1"
root: "{root[{@root_name}]}"
group_1:
  example_key_1: "{@root}/{@frame}"
group_2:
  frame_padding: 3
  root_name: "root_name_2"
  example_key_2: "{@root}/{@frame}"
group_3:
  frame: "{frame}"
  example_key_3: "{@root}/force_value/{@frame}"
```
**Equals**
```yaml
frame_padding: 4
frame: "{frame:0>4}"
root_name: "root_name_1"
root: "{root[root_name_1]}"
group_1:
  frame_padding: 4
  frame: "{frame:0>4}"
  root_name: "root_name_1"
  root: "{root[root_name_1]}"
  # `example_key_1` result
  example_key_1: "{root[root_name_1]}/{frame:0>4}"
group_2:
  frame_padding: 3
  frame: "{frame:0>3}"
  root_name: "root_name_2"
  root: "{root[root_name_2]}"
  # `example_key_2` result
  example_key_2: "{root[root_name_2]}/{frame:0>3}"
group_3:
  frame_padding: 4
  frame: "{frame}"
  root_name: "root_name_1"
  root: "{root[root_name_1]}"
  # `example_key_3` result
  example_key_3: "{root[root_name_1]}/force_value/{frame}"
```
:::warning
Be careful when using global keys. Keep in mind that **all global keys** are added to **all template groups**, and all inner keys used in their values **MUST** exist in the group.
For example, in the [required templates](#required-templates) it seems that `path: "{@folder}/{@file}"` could be a global key, but that would require all template groups to have `folder` and `file` keys, which is not true by default.
:::
## Environments
Here is where all the environment variables are set up. Each software has its own environment file where we set all the variables needed for it to function correctly. This is also the place where any extra in-house variables should be added. All of these individual configs are then loaded additively as needed, based on the current context.
For example, when launching Pype Tray, the **Global** and **Avalon** envs are loaded first. If the studio also uses *Deadline* and *Ftrack*, both of those environments get added at the same time. This sets the base environment for the rest of the pipeline, which will be inherited by all the applications launched from this point on.
When a user launches an application for a task, its general and versioned env files get added to the base before the software launches. When launching *Maya 2019*, both `maya.json` and `maya_2019.json` will be added.
If the project or task also has extra tools configured, say *Arnold MtoA 3.1.1*, a config JSON with the same name will be added too.
This way the environment is completely dynamic, with the possibility of overrides on a granular level, from project all the way down to task.
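The additive load can be sketched in a few lines. The file layout and helper name below are assumptions for illustration, not Pype's actual code:

```python
import json
import os
import tempfile

# Minimal sketch (file layout and helper name are assumptions): the base
# config is read first, then the versioned one overrides/extends it.
def load_tool_env(env_dir, app, version=None):
    env = {}
    names = [app] + (["{}_{}".format(app, version)] if version else [])
    for name in names:
        path = os.path.join(env_dir, name + ".json")
        if os.path.exists(path):
            with open(path) as f:
                env.update(json.load(f))
    return env

# Demo with throwaway files standing in for pype-config/environments:
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "maya.json"), "w") as f:
    json.dump({"MAYA_DISABLE_CIP": "Yes", "PYPE_LOG_NO_COLORS": "True"}, f)
with open(os.path.join(demo, "maya_2019.json"), "w") as f:
    json.dump({"PYPE_LOG_NO_COLORS": "False"}, f)

env = load_tool_env(demo, "maya", "2019")
# maya.json loads first; maya_2019.json overrides PYPE_LOG_NO_COLORS
```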
## Launchers
Considering that different studios use different ways of deploying software to their workstations, we need to tell Pype how to launch all the individual applications available in the studio.
Each application needs multiple files prepared for it to function correctly.
```text
application_name.toml
application_name.bat
application_name.sh
```
The TOML file tells Pype how to work with the application across the board: icons, the label in the GUI, *Ftrack* settings, but most importantly it defines which executable to run. These executables are stored in the `windows` and `linux` subfolders of the launchers folder. If `application_name.toml` defines that the executable to run is `application_name`, Pype assumes that `.bat` and `.sh` files with that name exist in the `windows` and `linux` folders in launchers. The correct version is picked automatically based on the platform Pype is running on.
These `.bat` and `.sh` scripts then have only one job: they have to point to the exact executable path on the system, or to a command that will launch the app we want. Version granularity is up to the studio to decide. We can show artists Nuke 11.3 while specifying the particular version 11.3v4 only in the `.bat` file, so the artist doesn't need to deal with it, or we can present them with 11.3v4 directly. The choice is mostly between artist control and more configuration files on the system.
## Presets
This is where most of the functional configuration of the pipeline happens: colorspace, data types, burn-in settings, geometry naming conventions, Ftrack attributes, playblast settings, types of exports and lots of other settings.
Presets are categorized in folders based on what they control or what host (DCC application) they are for. We're slowly working on documenting them all, but new ones are being created regularly as well. Hopefully the categories and names are sufficiently self-explanatory.
### colorspace
Defines all available color spaces in the studio. These configs not only tell the system which OCIO config to use, but also how exactly it needs to be applied in the given application: from loading the data, through previewing it, all the way to rendering.
### Dataflow
Defines allowed file types and data formats across the pipeline, including their particular codec and compression settings.
### Plugins
All the creator, loader and publisher configurations are stored here. We can override any properties of the default plugin values and more.
#### How does it work
Overriding plugin properties is as simple as adding what needs to be changed to a JSON file along with the plugin name.
Say you have a name-validating plugin:
```python
import re

import pyblish.api
import pype.api


class ValidateModelName(pyblish.api.InstancePlugin):
    order = pype.api.ValidateContentsOrder
    hosts = ['maya']
    families = ['model']
    label = 'Validate Mesh Name'
    # check for: 'foo_001_bar_GEO'
    regex = r'.*_\d*_.*_GEO'

    def process(self, instance):
        # pseudocode to get nodes
        models = get_models(instance.data.get("setMembers", None))
        r = re.compile(self.regex)
        for model in models:
            m = r.match(model)
            if m is None:
                raise RuntimeError("invalid name on {}".format(model))
```
_This is just a non-functional example._
Instead of creating a new plugin with a different regex, you can put:
```javascript
"ValidateModelName": {
  "regex": ".*\\d*_.*_geometry"
}
```
and put it into `repos/pype-config/presets/plugins/maya/publish.json`. There can be as many entries like that as you have plugins to override.
This will effectively replace the regex defined in the plugin at runtime with the one you've just defined in the JSON file. This way you can change any properties defined in a plugin.
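Under the hood this is plain attribute replacement on the plugin class. A minimal sketch (the helper name is illustrative, not Pype's actual code):

```python
# Minimal sketch (hypothetical helper) of what a preset override does at
# runtime: replace class attributes named in the JSON preset.
def apply_presets(plugin_class, presets):
    overrides = presets.get(plugin_class.__name__, {})
    for attr, value in overrides.items():
        setattr(plugin_class, attr, value)
    return plugin_class

class ValidateModelName:
    regex = r'.*_\d*_.*_GEO'

apply_presets(ValidateModelName, {"ValidateModelName": {"regex": r".*\d*_.*_geometry"}})
# ValidateModelName.regex is now the preset value
```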
:::tip loader and creators
A similar mechanism exists for *Loaders* and *Creators*. Use `create.json` for Creators, `load.json`
for Loaders and `publish.json` for **Pyblish** plugins like extractors, validators, etc.
Preset resolution works by taking the host name (for example *Maya*) and looking inside the
`repos/pype-config/presets/plugins/<host>/publish.json` path. If the plugin is not found there,
`repos/pype-config/presets/plugins/global/publish.json` is tried.
:::
:::tip Per project plugin override
You can override plugins per project. See [Per-project configuration](#per-project-configuration)
:::
## Schema
Holds all the database schemas for *MongoDB* that we use. In practice these are never changed on a per-studio basis, however we included them in the config for cases where a particular project might need very individual treatment.
## Per-project configuration
You can have per-project configuration with Pype. This allows you to have, for example, different
validation requirements, file naming, etc.
This is very easy to set up: point the `PYPE_PROJECT_CONFIGS` environment variable to the place
where you want those per-project configurations. Then just create a directory with the project name and
that's almost it. Inside, you can follow the hierarchy of the **pype-config** presets. Everything put there
will override the corresponding settings in **pype-config**.
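The lookup order can be sketched as: check the per-project directory first, then fall back to **pype-config**. The helper name and demo layout below are assumptions:

```python
import os
import tempfile

# Minimal sketch (hypothetical helper): a preset inside
# $PYPE_PROJECT_CONFIGS/<project> wins over the pype-config default.
def resolve_preset(relative_path, project, project_configs, config_root):
    override = os.path.join(project_configs, project, relative_path)
    if os.path.exists(override):
        return override
    return os.path.join(config_root, relative_path)

# Demo layout standing in for the real directories:
configs = tempfile.mkdtemp()
rel = os.path.join("presets", "plugins", "maya", "publish.json")
os.makedirs(os.path.join(configs, "FooProject", "presets", "plugins", "maya"))
open(os.path.join(configs, "FooProject", rel), "w").close()

chosen = resolve_preset(rel, "FooProject", configs, "/studio/pype-config")
# FooProject has an override; any other project falls back to pype-config
```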
### Example
You have a project where you need to disable some validators, let's say the overlapping
UVs validator in Maya.
Project name is *FooProject*.
Your `PYPE_PROJECT_CONFIGS` points to `/studio/pype/projects`.
Create projects settings directory:
```sh
mkdir $PYPE_PROJECT_CONFIGS/FooProject
```
Now you can use plugin overrides to disable validator:
Put:
```javascript
{
  "ValidateMeshHasOverlappingUVs": {
    "enabled": false
  }
}
```
into:
```sh
$PYPE_PROJECT_CONFIGS/FooProject/presets/plugins/maya/publish.json
```
And it's done. **ValidateMeshHasOverlappingUVs** is the class name of the validator - you can
find the name by looking into the Python file containing the validator code, or in the Pyblish GUI.
That way you can make plugins optional or set whatever properties you want on them, and those
settings will take precedence over the default site-wide settings.


@ -0,0 +1,50 @@
---
id: admin_docsexamples
title: Examples of using notes
sidebar_label: docsexamples
---
:::important
- This is my note
- another list
- super list
```python
import os
print(os.environ)
```
:::
:::tip
This is my note
:::
:::note
This is my note
:::
:::warning
This is my note
:::
:::caution
This is my note
:::
export const Highlight = ({children, color}) => (
  <span
    style={{
      backgroundColor: color,
      borderRadius: '2px',
      color: '#fff',
      padding: '0.2rem',
    }}>
    {children}
  </span>
);
<Highlight color="#25c2a0">Docusaurus green</Highlight> and <Highlight color="#1877F2">Facebook blue</Highlight> are my favorite colors.
I can write **Markdown** alongside my _JSX_!


@ -0,0 +1,203 @@
---
id: admin_ftrack
title: Ftrack Setup
sidebar_label: Ftrack Setup
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Ftrack is currently the main project management option for Pype. This documentation assumes that you are familiar with Ftrack and its basic principles. If you're new to Ftrack, we recommend having a thorough look at the [Ftrack Official Documentation](http://ftrack.rtd.ftrack.com/en/stable/).
## Prepare Ftrack for Pype
If you want to connect Ftrack to Pype you might need to make a few changes in the Ftrack settings. These changes would take a long time to do manually, so we prepared a few Ftrack actions to help you out. First, you'll need to launch Pype's tray application and set [Ftrack credentials](#credentials) to be able to run our Ftrack actions.
The only action that is strictly required is [Pype Admin - Create/Update Avalon Attributes](manager_ftrack_actions#create-update-avalon-attributes), which creates and sets the Custom Attributes necessary for Pype to function. If you want to use Pype only for new projects, you should read about best practice with a [new project](#new-project).
If you want to switch projects that are already in production, you might also need to run [Pype Doctor - Custom attr doc](manager_ftrack_actions#custom-attr-doc).
:::caution
Keep in mind that the **Custom attr doc** action will migrate certain attributes from Ftrack default ones to our custom attributes. Some attributes will also be renamed. We make a backup of the values, but be very careful with this option and consult us before running it.
:::
## Event Server
Ftrack Event Server is the key to automation of many tasks like _status changes_, _thumbnail updates_, _automatic synchronization to the Avalon database_ and many more. The event server should run at all times to perform all the required processing, as it is not possible to catch some events retrospectively with enough certainty.
### Running event server
There are specific launch arguments for the event server. With `$PYPE_SETUP/pype eventserver` you can launch the event server, but without prior preparation it will terminate immediately. The reason is that the event server requires 3 pieces of information: _Ftrack server URL_, _paths to events_ and _credentials (username and API key)_. The Ftrack server URL and event paths are set from Pype's environments by default, but the credentials must be set separately for security reasons.
:::note There are 2 ways of passing your credentials to the event server.
<Tabs
defaultValue="args"
values={[
{label: 'Additional Arguments', value: 'args'},
{label: 'Environments Variables', value: 'env'}
]}>
<TabItem value="args">
- **`--ftrack-user "your.username"`** : Ftrack Username
- **`--ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee"`** : User's API key
- **`--store-credentials`** : Entered credentials will be stored for the next launch _(so `--ftrack-user` and `--ftrack-api-key` don't need to be entered again)_
- **`--no-stored-credentials`** : Stored credentials are loaded first, so if you want to change credentials use this argument
- `--ftrack-url "https://yourdomain.ftrackapp.com/"` : Ftrack server URL _(not needed if you have set `FTRACK_SERVER` in Pype's environments)_
- `--ftrack-events-path "//Paths/To/Events/"` : Paths to events folders. May contain multiple paths separated by `;`. _(not needed if you have set `FTRACK_EVENTS_PATH` in Pype's environments)_
So if you want to use Pype's environments, you can launch the event server for the first time with these arguments: `$PYPE_SETUP/pype eventserver --ftrack-user "my.username" --ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee" --store-credentials`. From then on, if everything was entered correctly, you can launch the event server with just `$PYPE_SETUP/pype eventserver`.
</TabItem>
<TabItem value="env">
- `FTRACK_API_USER` - Username _("your.username")_
- `FTRACK_API_KEY` - User's API key _("00000aaa-11bb-22cc-33dd-444444eeeee")_
- `FTRACK_SERVER` - Ftrack server URL _("https://yourdomain.ftrackapp.com/")_
- `FTRACK_EVENTS_PATH` - Paths to events _("//Paths/To/Events/")_
We do not recommend this approach.
</TabItem>
</Tabs>
:::
:::caution
We do not recommend setting your Ftrack user and API key environment variables in a persistent way, for security reasons. Option 1, passing them as arguments, is substantially safer.
:::
### Where to run event server
We recommend running the event server on a stable server machine with the ability to connect to the Avalon database and the Ftrack web server. Best practice is to run the event server as a service.
:::important
The event server should **not** run more than once! That can cause major pipeline issues.
:::
### Which user to use
- must have at least `Administrator` role
- same user should not be used by an artist
### Run Linux service - step by step
1. create file:
`sudo vi /opt/pype/run_event_server.sh`
2. add content to the file:
```sh
export PYPE_DEBUG=3
pushd /mnt/pipeline/prod/pype-setup
. pype eventserver --ftrack-user <pype-admin-user> --ftrack-api-key <api-key>
```
3. create service file:
`sudo vi /etc/systemd/system/pype-ftrack-event-server.service`
4. add content to the service file
```toml
[Unit]
Description=Run Pype Ftrack Event Server Service
After=network.target
[Service]
Type=idle
ExecStart=/opt/pype/run_event_server.sh
Restart=on-failure
RestartSec=10s
[Install]
WantedBy=multi-user.target
```
5. change file permission:
`sudo chmod 0755 /etc/systemd/system/pype-ftrack-event-server.service`
6. enable service:
`sudo systemctl enable pype-ftrack-event-server`
7. start service:
`sudo systemctl start pype-ftrack-event-server`
* * *
## Ftrack events
Events are helpers for automation. They react to Ftrack Web Server events like entity attribute changes, entity creation, etc.
### Delete Avalon ID from new entity _(DelAvalonIdFromNew)_
This is used to remove the value from the `Avalon/Mongo Id` Custom Attribute when an entity is created.
The `Avalon/Mongo Id` Custom Attribute stores the id of a synchronized entity in the pipeline database. When a user _Copy -> Paste_ a selection of entities to create a similar hierarchy, the values of the Custom Attributes are copied too. That causes issues during synchronization because multiple entities end up with the same `Avalon/Mongo Id` value. To avoid this error we preventively remove these values when an entity is created.
### Next Task update _(NextTaskUpdate)_
Changes the status of the next task from `Not started` to `Ready` when the previous task is approved.
Multiple detailed rules for next task update can be configured in the presets.
### Synchronization to Avalon database _(Sync_to_Avalon)_
Automatic [synchronization to pipeline database](manager_ftrack#synchronization-to-avalon-database).
This event updates entities when they change in Ftrack: when a new entity is created or an existing entity is modified. An interface listing the issues is shown to users when the [synchronization rules](manager_ftrack#synchronization-rules) are not met. This event may also undo changes that would break the pipeline, namely _renaming a synchronized entity_ and _moving a synchronized entity in the hierarchy_.
:::important
Deleting an entity in Ftrack's default way is not processed, for safety reasons _(to delete an entity use the [Delete Asset/Subset action](manager_ftrack_actions#delete-asset-subset))_.
:::
### Synchronize hierarchical attributes _(SyncHierarchicalAttrs)_
Auto-synchronization of hierarchical attributes from Ftrack entities.
Related to the [Synchronize to Avalon database](#synchronization-to-avalon-database) event _(without it, this event makes no sense to use)_. Hierarchical attributes must be synchronized in a special way, so the synchronization is split into 2 parts. There are [synchronization rules](manager_ftrack#synchronization-rules) for hierarchical attributes that must be met, otherwise an interface with messages about the unmet conditions is shown to the user.
### Thumbnails update _(ThumbnailEvents)_
Updates the thumbnail of a Task and its parent when a new Asset Version with a thumbnail is created.
This is normally done by the Ftrack web server when an Asset Version is created with drag&drop, but not when it is created via the Ftrack API.
### Version to Task status _(VersionToTaskStatus)_
Updates a Task status based on status changes on its `AssetVersion`.
This solves the issue of an Asset Version's status changing while the artist assigned to the Task is watching only the task status, thus not noticing the review.
This event makes sure Asset Version statuses get synced to the version's task. After a status change on a version, the event first tries to set an identical status on the version's parent (usually a task). At the moment there are also a few status mappings hardcoded into the system. If the Asset Version's status was changed to:
- `Reviewed` then Task's status will be changed to `Change requested`
- `Approved` then Task's status will be changed to `Complete`
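The hardcoded mappings above can be pictured as a simple lookup (a sketch, not the actual event handler):

```python
# Sketch of the hardcoded status mappings; the fallback mirrors the
# "set identical status on the parent" behaviour described above.
VERSION_TO_TASK_STATUS = {
    "Reviewed": "Change requested",
    "Approved": "Complete",
}

def task_status_for(version_status):
    return VERSION_TO_TASK_STATUS.get(version_status, version_status)
```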
### Update First Version status _(FirstVersionStatus)_
This event handler allows setting a different status for the first Asset Version created in Ftrack.
This is useful, for example, if the first version publish doesn't contain any actual reviewable work but is only used for a round-trip conform check, in which case this version could receive the status `pending conform` instead of the standard `pending review`.
The behaviour can be filtered by the `name` or `type` of the task assigned to the Asset Version. Configuration can be found in the [ftrack presets](admin_presets_ftrack#first_version_status-dict).
* * *
## Credentials
If you want to be able to use Ftrack actions with Pype tray or the [event server](#event-server), you need to enter credentials. The credentials required for Ftrack are `Username` and `API key`.
### Credentials in tray
Handling credentials in the tray is described [here](artist_ftrack#first-use-best-case-scenario).
### Credentials in event server
Entering credentials for the event server is described [here](#running-event-server).
### Where to find API key
Please check the [official documentation](http://ftrack.rtd.ftrack.com/en/backlog-scaling-ftrack-documentation-story/developing/api_keys.html).

website/docs/admin_hosts.md

@ -0,0 +1,260 @@
---
id: admin_hosts
title: Hosts Setup
sidebar_label: Hosts Setup
---
## Host configuration
To add a new host application (for example, a new version of Autodesk Maya), just follow these steps:
### Launchers
You can find **launchers** in `repos/pype-config`. You'll notice a bunch of **[TOML](https://en.wikipedia.org/wiki/TOML)** files, plus Linux and Windows shell scripts in their respective folders. The **TOML** file
holds basic metadata for the host application. Its naming convention is important and follows this pattern:
```fix
app_name[_version].toml
```
for example `maya_2020.toml` or `nuke_11.3.toml`. More about that later. For now, let's look at the content of one of these files:
```toml
executable = "unreal"
schema = "avalon-core:application-1.0"
application_dir = "unreal"
label = "Unreal Editor 4.24"
ftrack_label = "UnrealEditor"
icon ="ue4_icon"
launch_hook = "pype/hooks/unreal/unreal_prelaunch.py/UnrealPrelaunch"
ftrack_icon = '{}/app_icons/ue4.png'
```
* `executable` - name (without extension) of the shell script launching the application (in the windows/linux/darwin folders)
* `schema` - not important, specifies the type of metadata
* `application_dir` - name of the folder used in the **app** key in [anatomy templates](admin_config#anatomy)
* `label` - name of the application shown in the launcher
* `ftrack_label` - name under which this application is shown in Ftrack actions (grouped by)
* `icon` - application icon used in the Avalon launcher
* `launch_hook` - path to Python code executed before the application starts (currently only from the Ftrack action)
* `ftrack_icon` - icon used in Ftrack
### Environments
You can modify environment variables for your application in `repos/pype-config/environments`. These are
[JSON](https://en.wikipedia.org/wiki/JSON) files, loaded and processed in a somewhat hierarchical way. For example, for Autodesk Maya 2020, the file named `maya.json` is processed first and then `maya_2020.json`. The syntax is as follows:
```json
{
  "VARIABLE": "123",
  "NEXT_VARIABLE": "{VARIABLE}4",
  "PLATFORMS": {
    "windows": "set_on_windows",
    "linux": "set_on_linux",
    "darwin": "set_on_mac"
  },
  "PATHS": [
    "path/1", "path/2", "path/3"
  ]
}
```
On Windows, this will result in an environment with:
```sh
VARIABLE="123"
NEXT_VARIABLE="1234"
PLATFORMS="set_on_windows"
PATHS="path/1;path/2;path/3"
```
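The processing rules shown above (platform dicts collapse to one value, lists join with the OS path separator, `{VARIABLE}` references resolve against already-set keys) can be sketched like this, as a minimal illustration rather than Pype's actual implementation:

```python
# Minimal sketch (not Pype's implementation) of the env-file processing rules.
def process_env(config, platform, sep=";"):
    env = {}
    for key, value in config.items():
        if isinstance(value, dict):
            value = value[platform]       # pick the value for this platform
        elif isinstance(value, list):
            value = sep.join(value)       # join path lists with the separator
        env[key] = value.format(**env)    # resolve {VARIABLE} references
    return env

result = process_env({
    "VARIABLE": "123",
    "NEXT_VARIABLE": "{VARIABLE}4",
    "PLATFORMS": {"windows": "set_on_windows"},
    "PATHS": ["path/1", "path/2", "path/3"],
}, "windows")
# -> VARIABLE="123", NEXT_VARIABLE="1234", PLATFORMS="set_on_windows",
#    PATHS="path/1;path/2;path/3"
```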
### Ftrack
You need to add your new application to Ftrack so it knows about it. This is done in Ftrack's System Preferences, in `Advanced:Custom Attributes`, where you can find the `applications` attribute. It looks like this:
![Ftrack - custom attributes - applications](assets/ftrack/ftrack-custom_attrib_apps.jpg)
The menu/value consists of two rows per application: the first row is the application name and the second is basically the filename of the **TOML** file mentioned above, without the `.toml` extension. After you add or modify whatever you need here, you need to add your new application to a project in Ftrack. Just open the project Info in Ftrack, find
**Applications** and add your new application there. If you are running the [event server](admin_ftrack#event-server), this information is synced to Avalon automatically. If not, you need to sync it manually by running the **Sync to Avalon** action.
Now restart Pype and your application should be ready.
### Conclusion
To wrap it up:
- create your shell scripts to launch the application (don't forget to set correct OS permissions)
- create a **TOML** file pointing to the shell scripts, set your icons and labels there
- check or create your environment **JSON** file in `environments`, even if it is empty (`{}`)
- to make it work with ftrack, modify **applications** in *Custom Attributes*, add it to your project and sync
- restart Pype
## Autodesk Maya
[Autodesk Maya](https://www.autodesk.com/products/maya/overview) is supported out of the box and doesn't require any special setup. Even though everything should be ready to go from the start, here is the checklist to get Pype running in Maya:
1. Correct executable in launchers, as explained [here](admin_config#launchers)
2. Pype environment variable added to the `PYTHONPATH` key in the `maya.json` environment preset.
```json
{
"PYTHONPATH": [
"{PYPE_ROOT}/repos/avalon-core/setup/maya",
"{PYPE_ROOT}/repos/maya-look-assigner"
]
}
```
## Foundry Nuke
[Foundry Nuke](https://www.foundry.com/products/nuke) is supported out of the box and doesn't require any special setup. Even though everything should be ready to go from the start, here is the checklist to get Pype running in Nuke:
1. Correct executable in launchers, as explained [here](admin_config#launchers)
2. The following environment variables in the `nuke.json` environment file (PYTHONPATH might need to be changed in different studio setups):
```json
{
"NUKE_PATH": [
"{PYPE_ROOT}/repos/avalon-core/setup/nuke/nuke_path",
"{PYPE_MODULE_ROOT}/setup/nuke/nuke_path",
"{PYPE_STUDIO_PLUGINS}/nuke"
],
"PYPE_LOG_NO_COLORS": "True",
"PYTHONPATH": {
"windows": "{VIRTUAL_ENV}/Lib/site-packages",
"linux": "{VIRTUAL_ENV}/lib/python3.6/site-packages"
}
}
```
## AWS Thinkbox Deadline
To support [AWS Thinkbox Deadline](https://www.awsthinkbox.com/deadline) you just need to:
1. Enable it in the **init_env** key of your `deploy.json` file:
```json
{
"PYPE_CONFIG": "{PYPE_ROOT}/repos/pype-config",
"init_env": ["global", "avalon", "ftrack", "deadline"]
}
```
2. Edit `repos/pype-config/environments/deadline.json` and change `DEADLINE_REST_URL` to point to your Deadline Web API service.
3. Set up *Deadline Web API service*. For more details on how to do it, see [here](https://docs.thinkboxsoftware.com/products/deadline/10.0/1_User%20Manual/manual/web-service.html).
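Once `DEADLINE_REST_URL` is set, it helps to fail early when the value is missing or malformed. The helper below is a hypothetical sketch (not part of Pype); the hostname is an example, and 8082 is the default Deadline Web Service port.

```python
import os
from urllib.parse import urlparse

def check_deadline_url(env=None):
    """Raise a clear error when DEADLINE_REST_URL is missing or malformed.

    Hypothetical helper for illustration, not part of Pype.
    """
    env = os.environ if env is None else env
    url = env.get("DEADLINE_REST_URL")
    if not url:
        raise RuntimeError("DEADLINE_REST_URL is not set")
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise RuntimeError("DEADLINE_REST_URL is not a valid URL: " + url)
    return url

# example value only; 8082 is the default Deadline Web Service port
url = check_deadline_url({"DEADLINE_REST_URL": "http://deadline-web:8082"})
```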
### Pype Deadline supplement code
There is some code that needs to be installed in the Deadline repository. You can find this repository overlay in
`pype-setup/vendor/deadline`. This whole directory can be copied to your existing Deadline repository.
Currently there is just a **GlobalJobPreLoad.py** script taking care of path remapping in case of a multi-platform
machine setup on the farm. If there is no mix of Windows/Linux machines on the farm, there is no need to use this.
## Virtual Vertex Muster
Pype supports rendering with [Muster](https://www.vvertex.com/). To enable it:
1. Add `muster` to **init_env** in your `deploy.json` file:
```json
{
"PYPE_CONFIG": "{PYPE_ROOT}/repos/pype-config",
"init_env": ["global", "avalon", "ftrack", "muster"]
}
```
2. Configure the URL of the Muster Web API in `repos/pype-config/environments/muster.json`. There you need to set `MUSTER_REST_URL` to the correct value.
3. Enable Muster in [tray presets](admin_presets_tools#item_usage-dict)
#### Template mapping
For setting up muster templates have a look at [Muster Template preset](admin_presets_tools#muster-templates)
:::note
Users will be asked for their Muster login credentials during Pype startup, or any time later if the authentication token expires.
:::
## Clockify
The [Clockify](https://clockify.me/) integration allows Pype users to seamlessly log their time into Clockify in the background. This in turn allows project managers to have a better overview of all logged times with Clockify dashboards and analytics.
1. Enable Clockify by adding `clockify` to **init_env** in your `deploy.json` file:
```json
{
"PYPE_CONFIG": "{PYPE_ROOT}/repos/pype-config",
"init_env": ["global", "avalon", "ftrack", "clockify"]
}
```
2. Configure your Clockify workspace. In `repos/pype-config/environments/clockify.json`, you need to change `CLOCKIFY_WORKSPACE` to the correct value:
```json
{
"CLOCKIFY_WORKSPACE": "test_workspace"
}
```
3. Enable Clockify in [tray presets](admin_presets_tools#item_usage-dict)
:::note
Users will be asked for their Clockify login credentials during Pype startup.
:::
## Unreal Editor
Pype supports [Unreal](https://www.unrealengine.com/). This support is currently tested only on Windows platform.
You can control Unreal behavior by editing `repos/pype-config/presets/unreal/project_setup.json`:
```json
{
"dev_mode": false,
"install_unreal_python_engine": false
}
```
Setting `dev_mode` to **true** will make all new projects created on tasks by Pype C++ projects. To work with those,
you need [Visual Studio](https://visualstudio.microsoft.com/) installed.
`install_unreal_python_engine` will install [20tab/UnrealEnginePython](https://github.com/20tab/UnrealEnginePython) as a plugin
in the new project. This implies `dev_mode`. Note that **UnrealEnginePython** is compatible only with specific versions of Unreal Engine (usually not the latest one). This plugin is not needed, but it can be used alongside the *"standard"* Python support in Unreal Engine to
extend Pype or Avalon functionality.
### Unreal Engine version detection
Pype tries to automatically find installed Unreal Engine versions. This relies on the [Epic Games Launcher](https://www.epicgames.com/store/en-US/).
If you have a custom install location (for example, you've built your own version from sources), you can set
`UNREAL_ENGINE_LOCATION` to point there. Pype then tries to find UE versions in `UE_x.xx` subfolders.
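The scan described above can be sketched like this (an illustration of the idea, not Pype's actual detection code):

```python
import os
import re

def list_unreal_versions(engine_location):
    """Return (version, path) pairs for UE_x.xx subfolders, newest first."""
    pattern = re.compile(r"^UE_(\d+\.\d+)$")
    found = []
    for name in os.listdir(engine_location):
        match = pattern.match(name)
        if match and os.path.isdir(os.path.join(engine_location, name)):
            found.append((match.group(1), os.path.join(engine_location, name)))
    # sort numerically by (major, minor) so that 4.10 ranks above 4.9
    found.sort(key=lambda item: tuple(int(p) for p in item[0].split(".")),
               reverse=True)
    return found
```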
### Avalon Unreal Integration plugin
The Avalon/Pype integration needs the [Avalon Unreal Integration Plugin](https://github.com/pypeclub/avalon-unreal-integration). Use the `AVALON_UNREAL_PLUGIN` environment variable to point to it. When a new
UE project is created, files are copied from this directory to the project's `Plugins` folder. If Pype detects that the plugin
isn't already built, it will copy its sources to the new project and force `dev_mode`. In that case, you need
**Visual Studio** to compile the plugin along with the project code.
### Dependencies
Pype integration needs:
* *Python Script Plugin enabled* (done automatically)
* *Editor Scripting Utilities* (done automatically)
* *PySide* installed in Unreal Python 2 (or PySide2/PyQt5 if you've built Unreal Editor with Python 3 support) (done automatically)
* *Avalon Unreal Integration plugin* ([sources are on GitHub](https://github.com/pypeclub/avalon-unreal-integration))
* *Visual Studio 2017* is needed to build *Avalon Unreal Integration Plugin* and/or if you need to work in `dev_mode`
### Environment Variables
- `AVALON_UNREAL_EDITOR` points to Avalon Unreal Integration Plugin sources/build
- `UNREAL_ENGINE_LOCATION` to override Pype autodetection and point to a custom Unreal installation
- `PYPE_UNREAL_ENGINE_PYTHON_PLUGIN` path to [20tab/UnrealEnginePython](https://github.com/20tab/UnrealEnginePython) optional plugin
---
id: admin_install
title: Pype Setup
sidebar_label: Pype Setup
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Introduction
The general approach to pype deployment is installing central repositories on a shared network storage which can be accessed by all artists in the studio. Simple shortcuts to launchers are then distributed to all workstations for artists to use. This approach ensures easy maintenance and updates.
When an artist first runs Pype, all the required python packages get installed automatically to their local workstation and are updated every time there is a change in the central installation.
:::note
Automatic workstation installation and updates will not work in offline scenarios. In that case, the `pype install --force --offline` command must be triggered explicitly on the workstation.
:::
## Requirements
### Python 3.6+
Pype requires Python 3.6 or later to be installed on each workstation running Pype.
:::note
If you want to use pype with Blender, you need to upgrade your python to 3.7 or higher.
:::
The Windows version of Python can be easily grabbed at [python.org](https://www.python.org/downloads/). The install location doesn't matter, but
the python executable should be in the `PATH` environment variable.
:::important Linux
On Linux it is somewhat different and depends on the distribution in use.
Some Linux variants (for example *Ubuntu*) need the **python-dev** variant of the python package that includes python headers and developer tools. This is needed because some of **Pype's** requirements compile themselves against python during installation. Please refer to your distribution community to find out how to install that package.
:::
<Tabs
groupId="platforms"
defaultValue="win"
values={[
{label: 'CentOS', value: 'win'},
{label: 'Ubuntu', value: 'linux'},
]}>
<TabItem value="win">
```sh
sudo yum group install "Development Tools"
```
Python 3.6 is not part of the official distribution. The easiest way is to add it with the help of *SCL* - the Software Collections project.
This has the advantage that it won't replace the system version of python.
```sh
sudo yum update
sudo yum install centos-release-scl
```
Now you can install python itself:
```sh
sudo yum install rh-python36
```
To be able to use installed version of python, you must enable it in shell:
```sh
scl enable rh-python36 bash
```
This will enable python 3.6 only in the currently running bash session!
Check it with:
```sh
python --version
```
</TabItem>
<TabItem value="linux">
```sh
sudo apt install build-essential
```
Some versions of Ubuntu already have python 3.6 installed; check it with:
```sh
python3 --version
```
If python shows a lower version than required, use:
```sh
sudo apt-get update
sudo apt-get install python3-dev
```
Please be aware that even if your system already has python 3.6, if it
didn't come from the **python3-dev** package, Pype will most likely fail to install
its dependencies.
</TabItem>
</Tabs>
:::note Override Python detection
You can override autodetection of Python. This can be useful if you want to use central network python location or some other custom location. Just set `PYPE_PYTHON_EXE` environment variable to point where you need.
:::
--------------
### MongoDB
Pype needs a site-wide installation of **MongoDB**. It should be installed on a
reliable server that all workstations (and possibly render nodes) can connect to. This
server holds the **Avalon** database that is at the core of everything, containing
very important data, so it should be backed up often; if high availability is
needed, the *replication* feature of **MongoDB** should be considered. This is beyond the
scope of this documentation, please refer to the [MongoDB Documentation](https://docs.mongodb.com/manual/replication/).
Pype can run its own instance of **MongoDB**, mostly for testing and development purposes.
For that it uses a locally installed **MongoDB**.
Download it from the [MongoDB website](https://www.mongodb.com/download-center/community), install it and
add it to the `PATH`. On Windows, Pype tries to find it in the standard installation destination or using `PATH`.
To run **MongoDB** on a server, use your server distribution tools to set it up (on Linux).
### Git
To be able to deploy Pype, **git** is needed. It will clone all required repositories and
control versions so future updates are easier. Git is, however, only required on the admin workstation for global studio updates.
See [how to install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
To access private repositories, you'll need other optional stuff like ssh key agents, etc.
### PowerShell (on Windows only)
PowerShell is included in recent versions of Windows. **Pype** requires at least
version 5.0, included in Windows 10 from the beginning and available for Windows 7 SP1,
Windows 8.1 and Windows Server 2012.
If you want to know what version of PowerShell you are running, execute this in a PowerShell prompt:
```powershell
$PSVersionTable
```
If you need to install PowerShell or update it, please refer to:
[Installing powershell on windows](https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-6)
### Xcode CLT (on Mac only)
Pype needs the **Xcode Command Line Tools** installed to provide its tools and to be able to install its dependencies via Python's `pip` command. Those will be downloaded
and installed automatically if needed.
### Other
:::warning Linux headless server
If you need to run Pype's **ftrack event server** on a headless Linux server, be aware that due to Qt dependencies you'll need to install OpenGL support there, even if the server doesn't have any real use for it.
:::
## Studio Setup
### Pype location
Before you install Pype, first clone the **pype-setup** repository to the place where you want Pype to be. In a studio setting
you probably want that destination to be on a shared network drive so all users can access it.
:::tip production and development branch
We recommend to maintain two *versions* of Pype. The first is **production** branch - the one your artists use everyday for their work. The second one is **development** version you should use for testing new features and as development sandbox. Development branch can point to a different Avalon database and use its own **ftrack event server**. More on that in [Pype Configuration](admin_config)
```text
Shared Network Drive
├─── pype
├─── prod
└─── dev
```
To prepare this structure, you can use:
```sh
cd /shared_drive/pype
git clone --branch 2.4.0 https://bitbucket.com/pypeclub/pype-setup.git prod
git clone --branch 2.4.0 https://bitbucket.com/pypeclub/pype-setup.git dev
```
:::
Specify your version (a branch name or a tag) after the `--branch` option - `git clone` accepts tag names there too.
:::note
It is possible to distinguish `dev` and `prod` by changing the Pype icon color to orange. To do so, create a `config.ini` file with the content:
```text
[DEFAULT]
dev=true
```
And put the file to:
```text
Shared Network Drive
├─── dev
└─── pype-setup
└───pypeapp
└───config.ini
```
:::
:::note
You should always use tags to check out a specific release, otherwise you end up with the *develop* branch, which can be unstable.
:::
:::warning
By default, both branches will use the same virtual environment. Be careful when modifying your requirements in the **dev** version because they will influence the **prod** version as well. To be safe, change the `PYPE_ENV` environment variable before using **dev** Pype commands.
:::
### Installation
<Tabs
groupId="platforms"
defaultValue="win"
values={[
{label: 'Windows', value: 'win'},
{label: 'Linux', value: 'linux'},
{label: 'Mac', value: 'mac'},
]}>
<TabItem value="win">
To install Pype you first need to get to its root directory in PowerShell.
If you have Pype on the network, you should mount it as a drive and assign it a consistent letter. You can also mount this network share as a directory link. As admin, run from shell:
```sh
mklink /d "C:\pipeline" "\\server\pipeline"
```
Then your network drive will be available transparently at `C:\pipeline`.
```sh
cd Z:\pype\production\pype-setup
```
-----
Now you can run installation itself:
```sh
.\pype.bat install
```
</TabItem>
<TabItem value="linux">
To install Pype you first need to get to its root directory in a bash shell.
If your Pype location is on a network drive, you should add it to `/etc/fstab` to
mount it automatically during system startup.
```sh
cd /location/of/pype
```
**Pype** can be installed with the following command:
```sh
. pype install
```
On Linux it is necessary to adjust user permissions to `/opt/pype/pype_env2` (or whatever you set in `PYPE_ENV`) and for that you need to be **root**.
</TabItem>
<TabItem value="mac">
To install Pype on Mac, you need to have Administrator privileges. There are also further requirements if you need to deploy repositories.
#### Mounting network drives
If your Pype location is on a network drive, you need to mount it first. Here are the steps to do it and make it re-mount automatically after your computer restarts:
1) in Finder press **Command+K**
2) enter path `smb://server/pipeline` and hit **Connect**
3) enter login and password
4) network drive is now mounted under `/Volumes/pipeline`
5) now go to **System Preferences**
6) click **Users & Groups -> Login Items**
7) click + and select the mounted drive and click **Add**
Run **Terminal**. Run following commands:
```sh
sudo -s
cd /Volumes/pipeline/pype/production/pype-setup
./pype install
```
`sudo -s` will elevate your privileges; you need to enter your password. `cd` changes directory to where you have Pype located and `pype install` runs the installation.
If there are warnings about some directory not being owned by the current user, you can fix it with the following commands:
```sh
chown -R yourusername /usr/local/pype
chown -R yourusername /Users/yourusername/Library/Caches/pip
```
Switch `yourusername` for your user name :)
</TabItem>
</Tabs>
What it basically does is:
1) Create a python virtual environment at `C:\Users\Public\pype_env2` on Windows, `/opt/pype/pype_env2` on Linux or `/usr/local/pype/pype_env2` on Mac.
This can be overridden by setting `PYPE_ENV` to a different path. The logic behind this is that on Windows this directory can be shared
between users on one machine - it only stores Pype dependencies, not any personal settings or credentials.
2) Then it will install all python dependencies defined in `pypeapp\requirements.txt` into this virtual environment.
The default installation will use your internet connection to download all necessary requirements.
#### Offline installation
You can also install Pype in offline scenarios:
```sh
pype install --offline
```
This will use dependencies downloaded into `pype-setup/vendor/packages` rather than pulling directly from the internet. Those packages must, however, first be
downloaded on a machine connected to the internet using:
```sh
pype download
```
:::warning multiple platforms
`pype download` will only download packages for the currently running platform. So if you run it on a Windows machine, only Windows packages get downloaded (along with many universal ones). If you then run `pype install --offline` on a Linux machine, it will probably fail as Linux-specific packages will be missing. In multi-platform environments we recommend running `pype download` on all used platforms to combine all necessary packages into `vendor/packages`.
:::
:::caution multiplatform caveat
There can be problems with library compatibility when using multi-platform environments. For example, with **PyQt 5.12** there seems to be no problem on Windows, but using it on **CentOS Linux 7** will cause problems because CentOS ships with older dependent libraries that will not work with the aforementioned PyQt version.
:::
#### Forcing Installation
Sometimes it is necessary to force a re-install of the Pype environment. To do this:
```sh
pype install --force
```
or
```sh
pype install --force --offline
```
in offline scenarios.
This is useful as a first step of debugging if Pype is misbehaving. You can of course just manually delete the `PYPE_ENV` directory and run `pype install` again.
### Deployment
After Pype is cloned and installed, it is necessary to *deploy* all repositories used by Pype. This must be done on a computer with
Internet access.
```sh
pype deploy
```
This command will process all repositories specified in `deploy/deploy.json` and clone them into `repos/` directory.
```sh
pype deploy --force
```
will deploy repositories, overwriting existing ones if they exist and setting them to the state specified in *deploy.json*.
:::note customizing deployment
You can customize your deployment to some extent. Everything specified in `deploy/deploy.json` is considered the default and can be overridden by creating your own *deploy.json* in a subdirectory.
```text
pype
├─── pypeapp
├─── deploy
│ ├─── deploy.json
│ ├─── deploy_schema-1.0.json
│ ├─── my_studio_override
│ │ ├─── deploy.json
│ │ └─── deploy_schema-1.0.json
│ ...
...
```
In such a configuration, `deploy/my_studio_override/deploy.json` will take precedence over the default one.
:::
To validate that your Pype deployment is OK, run:
```sh
pype validate
```
#### Structure of `deploy.json`
There are a few features in `deploy.json` that need to be explained in further detail.
Here is a list of the keys used and their function:
- `PYPE_CONFIG` - path to Pype configuration repository.
- `init_env` - these are environment files in Pype configuration repository that
are loaded immediately after Pype starts. They define basic functionality.
```js
"init_env": ["global", "avalon", "ftrack", "deadline"]
```
For example, if you don't use *Deadline* but you need *Muster* support, change `deadline` to `muster`.
Pype will then load `{PYPE_CONFIG}/environments/muster.json` and set environment variables there.
- `repositories`: this is the list of repositories that will be deployed to `repos/`. There are a few options
for each repository:
- `name`: name of repository will be used as directory name
- `url`: url of the git repository
  - `branch` or `tag`: specifies either a branch - its *HEAD* will be checked out - or a
`tag` - the commit tagged with the specified tag will be checked out.
- `pip`: these are additional dependencies to be installed by *pip* to virtual environment.
- `archive_files`: archive files to be unpacked somewhere - for example an ffmpeg installation or
anything else we need to extract to some place during deployment.
- `extract_path`: path to where this archive should be extracted
  - `url` or `vendor`: the url of the source to be downloaded, or a name in `vendor/packages` to be used
  - `md5_url`: optional url of an md5 file to validate the checksum of the downloaded file
  - `skip_first_subfolder`: moves everything inside the first directory of the archive to `extract_path`.
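Putting the keys above together, a minimal `deploy.json` could look like this (repository names, URLs, package names and archive locations are purely illustrative):

```json
{
  "PYPE_CONFIG": "{PYPE_ROOT}/repos/pype-config",
  "init_env": ["global", "avalon", "ftrack", "deadline"],
  "repositories": [
    {
      "name": "avalon-core",
      "url": "git@github.com:pypeclub/avalon-core.git",
      "branch": "develop"
    }
  ],
  "pip": ["some-extra-package==1.0.0"],
  "archive_files": [
    {
      "extract_path": "vendor/bin/ffmpeg_exec",
      "url": "https://example.com/ffmpeg.zip",
      "md5_url": "https://example.com/ffmpeg.zip.md5",
      "skip_first_subfolder": true
    }
  ]
}
```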
#### Offline Deployment
In offline scenarios it is up to you to replicate what `pype deploy` does. The easiest way
is to run `pype deploy` on a machine with internet access, get everything into `repos/` and move it to your studio install location:
```sh
cd pype-setup
tar cvzf pype_repos.tgz repos/
```
Do the same for anything deployed to *vendor*.
---
id: admin_getting_started
title: Getting Started
sidebar_label: Getting Started
---
## Introduction
**Pype** is part of a larger ecosystem of tools built around [avalon](https://github.com/getavalon/core) and [pyblish](https://github.com/pyblish/pyblish-base).
To be able to use it, you need those tools and a set-up environment. This
requires additional software installed and configured correctly on your system.
Fortunately this daunting task is handled for you by the **Pype Setup** package itself. **Pype** can
install most of its requirements automatically, but a few more things are needed in
various usage scenarios.
## Software requirements
- **Python 3.7+** (Locally on all workstations)
- **PowerShell 5.0+** (Windows only)
- **Bash** (Linux only)
- **MongoDB** (Centrally accessible)
There are other requirements for different advanced scenarios. For a more
complete guide please refer to the [Pype Setup page](admin_install).
## Hardware requirements
Pype should be installed centrally on a fast network storage with at least read access rights for all workstations and users in the studio. A full deployment with all dependencies and both development and production branches installed takes about 1GB of data; however, to ensure smooth updates and general working comfort, we recommend allocating at least 4GB of storage dedicated to the Pype deployment.
For a well-functioning ftrack event server, we recommend a Linux virtual server with Ubuntu or CentOS. CPU and RAM allocation needs differ based on studio size, but 2GB of RAM, a dual-core CPU and around 4GB of storage should suffice.
## Central repositories
### Pype-setup
Pype-Setup is the glue that binds Avalon, Pype and the studio together. It is essentially a wrapper application that manages requirements, installation, all the environments, and runs all of our standalone tools.
It has two main interfaces: the `pype` CLI command for all admin level tasks and the `Pype Tray` application for artists. Documentation for the `pype` command can be found [here](admin_pype_commands).
This is also the only repository that needs to be downloaded by hand before a full Pype deployment can take place.
### Pype
Pype is our "Avalon config" in Avalon terms: it takes avalon-core and expands on its default features and capabilities. This is where the vast majority of the code that works with your data lives.
Avalon gives us the ability to work with a certain host, say Maya, in a standardised manner, but Pype defines **how** we work with all the data. Avalon by default expects each studio to have their own avalon config, which is reasonable considering all studios have slightly different requirements and workflows. We abstracted a lot of this customisability out of the avalon config by allowing Pype behaviour to be altered by a set of .json based configuration files and presets.
Thanks to that, we are able to maintain one codebase for the vast majority of features across all our clients' deployments, while keeping the option to tailor the pipeline to each individual studio.
### Avalon-core
Avalon-core is the heart and soul of Pype. It provides the base functionality including GUIs (albeit expanded and modified by us), database connection and maintenance, standards for data structures and working with entities, and a lot of universal tools.
Avalon is being very actively developed and maintained by a community of studios and TDs from around the world, with Pype Club team being an active contributor as well.
## Studio Specific Repositories
### Pype-Config
The pype-config repository needs to be prepared and maintained for each studio using Pype and holds all of their specific requirements for Pype. Those range from naming conventions and folder structures (in Pype referred to as `project anatomy`), through colour management and data preferences, all the way to which individual validators they want to use and what they are validating against.
Thanks to a very flexible and extensible system of presets, we're almost always able to accommodate client requests for modified behaviour by introducing new presets, rather than permanently altering the main codebase for everyone.
### Studio-Project-Configs
On top of the studio-wide Pype config, we support project level overrides for any and all variables and presets available in the main studio config.
### Studio-Project-Scripts
---
id: admin_presets_ftrack
title: Presets > Ftrack
sidebar_label: Ftrack
---
## PROJECT_DEFAULTS.json
path: `pype-config/presets/ftrack/project_defaults.json`
A list of all project defaults to be set when you run "Ftrack Prepare Project"
```json
{
"fps": 25,
"frameStart": 1001,
"frameEnd": 1100,
"clipIn": 1001,
"clipOut": 1100,
"handleStart": 10,
"handleEnd": 10,
"resolutionHeight": 1080,
"resolutionWidth": 1920,
"pixelAspect": 1.0,
"applications": [
"maya_2019", "nuke_11.3", "nukex_11.3", "nukestudio_11.3", "deadline"
],
"tools_env": [],
"avalon_auto_sync": true
}
```
## FTRACK_CONFIG.json
path: `pype-config/presets/ftrack/ftrack_config.json`
### `sync_to_avalon` [dict]
list of statuses that allow moving, deleting and changing names of ftrack entities. Once any child of an entity is set to a status different from those listed here, it is considered to have been worked on and no major changes to the hierarchy are allowed any more.
`statuses_name_change [list]`:
```json
{
"sync_to_avalon": {
"statuses_name_change": ["not ready", "ready"]
}
}
```
### `status_update` [dict]
mapping of statuses for automatic updates.
The key specifies the resulting status and the value is a list of statuses from which changing to the target status is allowed.
`_ignore_` [list]: source statuses to ignore
`target_status` [list]: source statuses allowed to change to this target
```json
{
"status_update": {
"_ignore_": ["in progress", "ommited", "on hold"],
"Ready": ["not ready"],
"In Progress" : ["_any_"]
}
}
```
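The rules above can be sketched as a small helper (a simplified illustration, not the actual event handler):

```python
def next_status_allowed(mapping, current, target):
    """Return True if `current` status may be changed to `target`."""
    current = current.lower()
    # statuses in _ignore_ never trigger an automatic change
    if current in mapping.get("_ignore_", []):
        return False
    sources = mapping.get(target, [])
    # "_any_" means any (non-ignored) source status is allowed
    return "_any_" in sources or current in sources

mapping = {
    "_ignore_": ["in progress", "ommited", "on hold"],
    "Ready": ["not ready"],
    "In Progress": ["_any_"],
}
```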
### `status_version_to_task` [dict]
mapping of statuses that propagate automatically from a published version to its task. By default we search for an identical status; however, this preset lets you remap between different statuses on versions and tasks.
`status_version_to_task` [dict]:
```json
{
"status_version_to_task": {
"__description__": "Status `from` (key) must be lowered!",
"in progress": "in progress",
"approved": "approved"
}
}
```
## SERVER.json
path: `pype-config/presets/ftrack/server.json`
### `first_version_status` [dict]
`task_status_map` [list]: list of dictionaries specifying individual mappings
`status` [string]: status to set if `key` and `name` match.
`name` [string]: name of the task or the task's type.
`key` [enumerator]: _optional_ - specifies where to look for the name. There are two possible values:
1. `task`: task's name (default)
2. `task_type`: task type's name
It doesn't matter if the values are lowercase or capitalized.
```json
{
"FirstVersionStatus": {
"task_status_map": [{
"key": "task",
"name": "compositing",
"status": "Blocking"
}, {
"MORE ITEMS...": "MORE VALUES..."
}]
},
"...": "{...}"
}
```
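The lookup behaviour described above can be sketched as follows (an illustrative helper, not the actual implementation):

```python
def first_version_status(task_name, task_type, task_status_map):
    """Return the status for a newly published first version, or None."""
    for item in task_status_map:
        key = item.get("key", "task")  # "task" (name, default) or "task_type"
        name = task_name if key == "task" else task_type
        # comparison is case-insensitive, as the preset documentation states
        if name.lower() == item["name"].lower():
            return item["status"]
    return None

task_status_map = [
    {"key": "task", "name": "compositing", "status": "Blocking"},
]
```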
---
id: admin_presets_maya
title: Presets > Maya
sidebar_label: Maya
---
## CAPTURE.json
path: `pype-config/presets/maya/capture.json`
All the viewport settings for maya playblasts.
### `Codec` [dict] ###
```json
"Codec": {
"compression": "jpg",
"format": "image",
"quality": 95
}
```
### `Display Options` [dict] ###
```json
"Display Options": {
"background": [
0.7137254901960784,
0.7137254901960784,
0.7137254901960784
],
"backgroundBottom": [
0.7137254901960784,
0.7137254901960784,
0.7137254901960784
],
"backgroundTop": [
0.7137254901960784,
0.7137254901960784,
0.7137254901960784
],
"override_display": true
}
```
### `Generic` [dict] ###
```json
"Generic": {
"isolate_view": true,
"off_screen": true
},
```
### `IO` [dict] ###
```json
"IO": {
"name": "",
"open_finished": false,
"raw_frame_numbers": false,
"recent_playblasts": [],
"save_file": false
},
```
### `PanZoom` [dict] ###
```json
"PanZoom": {
"pan_zoom": true
},
```
### `Viewport Options` [dict] ###
```json
"Viewport Options": {
"cameras": false,
"clipGhosts": false,
"controlVertices": false,
"deformers": false,
"dimensions": false,
"displayLights": 0,
"dynamicConstraints": false,
"dynamics": false,
"fluids": false,
"follicles": false,
"gpuCacheDisplayFilter": false,
"greasePencils": false,
"grid": false,
"hairSystems": false,
"handles": false,
"high_quality": true,
"hud": false,
"hulls": false,
"ikHandles": false,
"imagePlane": false,
"joints": false,
"lights": false,
"locators": false,
"manipulators": false,
"motionTrails": false,
"nCloths": false,
"nParticles": false,
"nRigids": false,
"nurbsCurves": false,
"nurbsSurfaces": false,
"override_viewport_options": true,
"particleInstancers": false,
"pivots": false,
"planes": false,
"pluginShapes": false,
"polymeshes": true,
"shadows": false,
"strokes": false,
"subdivSurfaces": false,
"textures": false,
"twoSidedLighting": true
}
```
## Maya instance scene types
It is possible to set when to use `.ma` or `.mb` for:
- camera
- setdress
- layout
- model
- rig
- yetiRig
Just put `ext_mapping.json` into `presets/maya`. Inside is simple mapping:
```json
{
"rig": "mb",
"camera": "mb"
}
```
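The mapping is a plain dictionary lookup with a default (an illustrative sketch of the behaviour):

```python
def scene_type_for(family, ext_mapping):
    """Return the Maya scene extension for a family; "ma" is the default."""
    return ext_mapping.get(family, "ma")

# example mapping, matching the preset above
ext_mapping = {"rig": "mb", "camera": "mb"}
```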
*Note that the default type is `ma`.*
---
id: admin_presets_nukestudio
title: Presets > NukeStudio
sidebar_label: Nukestudio
---
## TAGS.json
path: `pype-config/presets/nukestudio/tags.json`
Each tag defines its defaults in a `.json` file. Inside the file you can change the default values as shown in the example (`>>>"1001"<<<`). Please be careful not to alter the `family` value.
```json
"Frame start": {
"editable": "1",
"note": "Starting frame for comps",
"icon": {
"path": "icons:TagBackground.png"
},
"metadata": {
"family": "frameStart",
"number": >>>"1001"<<<
}
}
```
## PUBLISH.json
path: `pype-config/presets/plugins/nukestudio/publish.json`
### `CollectInstanceVersion` [dict] ###
This plugin is enabled by default, so it synchronizes the version of published instances with the version of the workfile. Set `enabled` to `false` if you wish to let the publishing process decide on the next available version.
```python
{
"CollectInstanceVersion": {
"enabled": false
}
}
```
### `ExtractReviewCutUpVideo` [dict] ###
path: `pype-config/presets/plugins/nukestudio/publish.json`
The plugin is responsible for cutting shorter or longer source material for review. Here you can add any additional tags you wish to be added into the extract review process.
The plugin generates a re-edited intermediate video with handles, even if it has to add empty black frames. Some productions prefer to use review material without handles, so in the example `no-handles` is added as a tag. This allows a further review extractor to publish the review without handles, without affecting other outputs.
```python
{
"ExtractReviewCutUpVideo": {
"tags_addition": ["no-handles"]
}
}
```
---
id: admin_presets_plugins
title: Presets > Plugins
sidebar_label: Plugins
---
## Global
### publish.json
Each plugin in the JSON should be addressed by the name of its class. Some default attributes are recommended in case you wish a plugin to be switched off for some projects in `project overwrites`, like `enabled: false`. For example, to switch off the plugin class `PluginName(pyblish.api.ContextPlugin)` in file `name_of_plugin_file.py`, just add the following text at the root level of the publish.json file:
```json
{
"PluginName": {
"enabled": false
}
}
```
### `ExtractReview`
The plugin responsible for automatic FFmpeg conversion to a variety of formats.
Supported extensions for both input and output: `["exr", "jpg", "jpeg", "png", "dpx", "mov", "mp4"]`
**ExtractReview** creates new representations based on presets and the representations in the instance. The preset should contain only one attribute, **"profiles"**, which is a list of profile items. Each profile item has **outputs**, where the definitions of possible outputs are, and may have filters specified for **hosts**, **tasks** and **families**.
#### Profile filters
As mentioned above, you can define multiple profiles for different contexts. The profile whose filters best match the current context is used. You can define a profile without filters and use it as the **default**. Only **one or none** profile is processed per instance.
All context filters are lists which may contain strings or regular expressions (regex).
- **hosts** - Host from which publishing was triggered. `["maya", "nuke"]`
- **tasks** - Currently processed task. `["[Cc]ompositing", "[Aa]nimation"]`
- **families** - Main family of processed instance. `["plate", "model"]`
:::important Filtering
Filters are optional and may not be set. In case multiple profiles match the current context, a profile with filters has higher priority than a profile without filters.
:::
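A rough sketch of how such filtering could work (the function names are illustrative, not the plugin's actual API); each filter list may mix plain strings and regular expressions:

```python
import re


def value_matches(patterns, value):
    """True if value matches any plain string or regex in patterns."""
    return any(re.fullmatch(pattern, value) for pattern in patterns)


def profile_matches(profile, host, task, family):
    """Check a profile's optional hosts/tasks/families filters against context."""
    for key, value in (("hosts", host), ("tasks", task), ("families", family)):
        patterns = profile.get(key)
        # An unset filter matches everything.
        if patterns and not value_matches(patterns, value):
            return False
    return True
```

For example, `profile_matches({"tasks": ["[Cc]ompositing"]}, "nuke", "Compositing", "render2d")` matches thanks to the regex, while a profile with `"hosts": ["maya"]` would not match a publish from Nuke.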
#### Profile outputs
A profile may have multiple outputs from one input, which is why **outputs** is a dictionary: each key represents a **filename suffix**, to avoid overriding files with the same name, and each value represents the output definition itself. A definition may contain multiple optional keys.
| Key | Description | Type | Example |
| --- | --- | --- | --- |
| **width** | Width of output. | int | 1920 |
| **height** | Height of output. | int | 1080 |
| **letter_box** | Set letterbox ratio. | float | 2.35 |
| **ext** | Extension of output file(s). | str | "mov" |
| **tags** | Tags added to new representation. | list | [here](#new-representation-tags-tags) |
| **ffmpeg_args** | Additional FFmpeg arguments. | dict | [here](#ffmpeg-arguments-ffmpeg_args) |
| **filter** | Filters definition. | dict | [here](#output-filters-filter) |
:::note
As mentioned above, **all keys are optional**. If they are not filled at all, then **"ext"** is filled with the input file's extension, and the resolution keys **"width"** and **"height"** are filled from the instance data, or from the input resolution if the instance doesn't have them set.
:::
:::important resolution
It is not possible to enter only **"width"** or only **"height"**. In that case the single set value will be skipped.
:::
#### New representation tags (`tags`)
You can add tags to the representation created during the extraction process. This may help define what should happen with the representation in upcoming plugins.
| Tag | Description |
| --- | --- |
| **burnin** | Add burnins with predefined values into the output. |
| **preview** | Will be used as preview in Ftrack. |
| **reformat** | Rescale to format based on width and height keys. |
| **bake-lut** | Bake a LUT into the output (if a LUT path is available in the data). |
| **slate-frame** | Add a slate frame at the beginning of the video. |
| **no-handles** | Remove the shot handles from the output. |
| **sequence** | Generate a sequence of images instead of single frame.<br />Is applied only if **"ext"** of output is image extension e.g.: png or jpg/jpeg. |
:::important Example
The tags key must contain a list of strings.
```json
{
"tags": ["burnin", "preview"]
...
}
```
:::
#### FFmpeg arguments (`ffmpeg_args`)
It is possible to set additional FFmpeg arguments. Arguments are split into 4 categories: **"input"**, **"video_filters"**, **"audio_filters"** and **"output"**.
| Key | Description | Type | Example |
| --- | --- | --- | --- |
| **input** | FFmpeg arguments added before video/image input. | list | ["-gamma 2.2"] |
| **video_filters** | All values which should be in `-vf` or `-filter:v` argument. | list | ["scale=iw/2:-1"] |
| **audio_filters** | All values which should be in `-af` or `-filter:a` argument. | list | ["loudnorm"] |
| **output** | FFmpeg arguments added before output filepath. | list | ["-pix_fmt yuv420p", "-crf 18"] |
:::important Example
For more information about FFmpeg arguments please visit [official documentation](https://ffmpeg.org/documentation.html).
```json
{
"ffmpeg_args": {
"input": ["-gamma 2.2"],
"video_filters": ["yadif=0:0:0", "scale=iw/2:-1"],
"output": ["-pix_fmt yuv420p", "-crf 18"]
}
...
}
```
:::
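To illustrate how these four categories map onto an FFmpeg command line, here is a rough sketch; the argument order and joining are assumptions for illustration, not the extractor's exact implementation:

```python
def build_ffmpeg_args(ffmpeg_args, input_path, output_path):
    """Assemble an FFmpeg command from the four preset categories."""
    cmd = ["ffmpeg"]
    # "input" arguments go before the video/image input.
    for arg in ffmpeg_args.get("input", []):
        cmd.extend(arg.split())          # e.g. "-gamma 2.2" -> ["-gamma", "2.2"]
    cmd.extend(["-i", input_path])
    # Filter lists become comma-separated -vf / -af filter chains.
    if ffmpeg_args.get("video_filters"):
        cmd.extend(["-vf", ",".join(ffmpeg_args["video_filters"])])
    if ffmpeg_args.get("audio_filters"):
        cmd.extend(["-af", ",".join(ffmpeg_args["audio_filters"])])
    # "output" arguments go before the output filepath.
    for arg in ffmpeg_args.get("output", []):
        cmd.extend(arg.split())
    cmd.append(output_path)
    return cmd
```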
#### Output filters (`filter`)
Even if a profile has filtering options, it is possible that output definitions need to be filtered by all instance **families** or by the representation's **tags**.
The families filter in the output's `filter` checks all of the instance's families and may check for a single family or a combination of families.
| Key | Description | Type | Example |
| --- | --- | --- | --- |
| **families** | At least one family item must match instance's families to process definition. | list | ["review"] |
| **tags** | At least one tag from list must be in representation's tags to process definition. | list | ["preview"] |
:::important Example
These filters help with explicit processing, but do **NOT** use them if it's not necessary.
```json
{
"filter": {
"families": [
"review",
["ftrack", "render2d"]
],
"tags": ["preview"],
}
...
}
```
In this example representation's tags must contain **"preview"** tag and instance's families must contain **"review"** family, or both **"ftrack"** and **"render2d"** families.
:::
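The families rule described above (a plain item means "this family must be present"; a nested list means "all of these must be present") could be sketched like this — an illustration of the rule, not the plugin's code:

```python
def families_filter_match(filter_families, instance_families):
    """At least one item must match; a nested list requires all its members."""
    instance_families = set(instance_families)
    for item in filter_families:
        if isinstance(item, list):
            # Combination of families: all must be present.
            if set(item) <= instance_families:
                return True
        elif item in instance_families:
            # Single family: presence is enough.
            return True
    return False
```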
#### Simple example
This example just creates a **mov** output with the filename suffix **"simplemov"** for all representations with supported extensions.
```json
{
"ExtractReview": {
"profiles": [{
"outputs": {
/* Filename suffix "simplemov"*/
"simplemov": {
/* Output extension will be "mov"*/
"ext": "mov"
}
}
}]
}
}
```
#### More complex example
:::note
This is just a usage example, without relevant data. Do **NOT** use these presets as defaults in production.
:::
```json
{
"ExtractReview": {
"profiles": [
{
/* 1. profile - Without filters will be used as default. */
"outputs": {
/* Extract single mov Prores 422 with burnins, slate and baked lut. */
"prores": {
"ext": "mov",
"codec": [
"-codec:v prores_ks",
"-profile:v 3"
],
"tags": ["burnin", "reformat", "bake-lut", "slate-frame"]
}
}
}, {
/* 2. profile - Only for Nuke, "compositing" task and instance family "render2d". */
"hosts": ["nuke"],
"tasks": ["compositing"],
"families": ["render2d"],
"outputs": {
/* Extract preview mov with burnins and without handles.*/
"h264": {
"ext": "mov",
"ffmpeg_args": {
"output": [
"-pix_fmt yuv420p",
]
},
"tags": ["burnin", "preview", "no-handles"]
},
/* Also extract mxf with slate */
"edit": {
"ext": "mxf",
"ffmpeg_args": {
"output": [
"-codec:v dnxhd",
"-profile:v dnxhr_444",
"-pix_fmt yuv444p10le",
"-b:v 185M",
"-ar 48000",
"-qmax 51"
]
},
"tags": ["slate-frame"]
}
}
}, {
/* 3. profile - Default profile for Nuke and Maya. */
"hosts": ["maya", "nuke"],
"outputs": {
/* Extract preview mov with burnins and with forced resolution. */
"h264": {
"width": 1920,
"height": 1080,
"ext": "mov",
"ffmpeg_args": {
"input": [
"-gamma 2.2"
],
"output": [
"-pix_fmt yuv420p",
"-crf 18",
"-intra"
]
},
"tags": ["burnin", "preview"]
}
}
}
]
}
}
```
### `ExtractBurnin`
The plugin is responsible for adding burnins into review representations.
Burnins are text values painted on top of the input and may be surrounded with a box, in 6 available positions: `Top Left`, `Top Center`, `Top Right`, `Bottom Left`, `Bottom Center`, `Bottom Right`.
![presets_plugins_extract_burnin](assets/presets_plugins_extract_burnin_01.png)
ExtractBurnin creates new representations based on the plugin presets and the representations in the instance. Presets may contain 3 keys: **options**, **profiles** and **fields**.
#### Burnin settings (`options`)
Options is a dictionary where you can set the global appearance of burnins. Options may be left unfilled; in that case default values are used.
| Key | Description | Type | Example | Default |
| --- | --- | --- | --- | --- |
| **font_size** | Size of text. | float | 24 | 42 |
| **font_color** | Color of text. | str | [FFmpeg color documentation](https://ffmpeg.org/ffmpeg-utils.html#color-syntax) | "white" |
| **opacity** | Opacity of text. | float | 0.7 | 1 |
| **x_offset** | Horizontal margin around text and box. | int | 0 | 5 |
| **y_offset** | Vertical margin around text and box. | int | 0 | 5 |
| **bg_padding** | Padding for box around text. | int | 0 | 5 |
| **bg_color** | Color of box around text. | str | [FFmpeg color documentation](https://ffmpeg.org/ffmpeg-utils.html#color-syntax) | "black" |
| **bg_opacity** | Opacity of box around text. | float | 1 | 0.5 |
#### Burnin profiles (`profiles`)
The plugin is skipped if `profiles` is not set at all. Profiles contain a list of profile items. Each profile item has **burnins**, where the definitions of possible burnins are, and may have filters specified for **hosts**, **tasks** and **families**. Filters work the same way as described in [ExtractReview](#profile-filters).
#### Profile burnins
A profile may have multiple burnin outputs from one input, which is why **burnins** is a dictionary: each key represents a **filename suffix**, to avoid overriding files with the same name, and each value represents a burnin definition. A burnin definition may contain multiple optional keys.
| Key | Description | Type | Example |
| --- | --- | --- | --- |
| **top_left** | Top left corner content. | str | "{dd}.{mm}.{yyyy}" |
| **top_centered** | Top center content. | str | "v{version:0>3}" |
| **top_right** | Top right corner content. | str | "Static text" |
| **bottom_left** | Bottom left corner content. | str | "{asset}" |
| **bottom_centered** | Bottom center content. | str | "{username}" |
| **bottom_right** | Bottom right corner content. | str | "{frame_start}-{current_frame}-{frame_end}" |
| **options** | Options overrides for this burnin definition. | dict | [Options](#burnin-settings-options) |
| **filter** | Filters definition. | dict | [ExtractReview output filter](#output-filters-filter) |
:::important Position keys
Any position key (`top_left` through `bottom_right`) is skipped if it is not set, contains an empty string, or is set to `null`.
Position keys are not case sensitive, so `TOP_LEFT` or `Top_Left` can be used instead of `top_left`.
:::
:::note Filename suffix
The filename suffix is appended to the filename suffix of the source representation.
If the source representation has suffix **"h264"** and the burnin suffix is **"client"**, then the final suffix is **"h264_client"**.
:::
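The suffix concatenation above amounts to something like this (a sketch; the helper name is illustrative):

```python
def final_suffix(source_suffix, burnin_suffix):
    """Append the burnin suffix to the source representation's suffix."""
    if not source_suffix:
        return burnin_suffix
    return "{}_{}".format(source_suffix, burnin_suffix)
```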
**Available keys in burnin content**
- It is possible to use same keys as in [Anatomy](admin_config#available-keys).
- It is allowed to use [Anatomy templates](admin_config#anatomy) themselves in burnins if they can be filled with available data.
- Additional keys in burnins:
| Burnin key | Description |
| --- | --- |
| frame_start | First frame number. |
| frame_end | Last frame number. |
| current_frame | Frame number for each frame. |
| duration | Total number of frames. |
| resolution_width | Resolution width. |
| resolution_height | Resolution height. |
| fps | FPS of the output. |
| timecode | Timecode computed from frame start and fps. |
:::warning
`timecode` is a special key that can be used **only at the end of the content**. (`"BOTTOM_RIGHT": "TC: {timecode}"`)
:::
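For example, a timecode value could be derived from the frame number and fps roughly like this (a sketch; the actual burnin tool may compute it differently, e.g. with drop-frame handling):

```python
def frames_to_timecode(frame, fps):
    """Convert an absolute frame number to an HH:MM:SS:FF timecode string."""
    fps = int(round(fps))
    ff = frame % fps                 # frames within the current second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(hh, mm, ss, ff)
```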
```json
{
"profiles": [{
"burnins": {
"example": {
"TOP_LEFT": "{dd}.{mm}.{yyyy}",
/* Use anatomy template values. */
"TOP_CENTERED": "{anatomy[publish][path]}",
/* Python's formatting:
":0>3" adds padding to version number to have 3 digits. */
"TOP_RIGHT": "v{version:0>3}",
"BOTTOM_LEFT": "{frame_start}-{current_frame}-{frame_end}",
"BOTTOM_CENTERED": "{asset}",
"BOTTOM_RIGHT": "{username}"
}
}
}]
...
}
```
#### Default content values (`fields`)
If you want to set position content values for all or most burnin definitions, you can set them in **"fields"**. They will be added to every burnin definition in all profiles. A value can be overridden if the same position key is filled in the burnin definition.
```json
{
"fields": {
"TOP_LEFT": "{yy}-{mm}-{dd}",
"TOP_CENTERED": "{username}",
"TOP_RIGHT": "v{version:0>3}"
},
"profiles": [{
"burnins": {
/* example1 has empty definition but top left, center and right values
will be filled. */
"example1": {},
/* example2 has 2 overrides. */
"example2": {
/* Top left value is overridden with asset name. */
"TOP_LEFT": "{asset}",
/* Top center will be skipped. */
"TOP_CENTERED": null
}
}
}]
}
```
#### Full presets example
:::note
This is just a usage example, without relevant data. Do **NOT** use these presets as defaults in production.
:::
```json
{
"ExtractBurnin": {
"options": {
"opacity": 1,
"x_offset": 5,
"y_offset": 5,
"bg_padding": 5,
"bg_opacity": 0.5,
"font_size": 42
},
"fields": {
"TOP_LEFT": "{yy}-{mm}-{dd}",
"TOP_RIGHT": "v{version:0>3}"
},
"profiles": [{
"burnins": {
"burnin": {
"options": {
"opacity": 1
},
"TOP_LEFT": "{username}"
}
}
}, {
"families": ["animation", "pointcache", "model"],
"tasks": ["animation"],
"burnins": {}
}, {
"families": ["render"],
"tasks": ["compositing"],
"burnins": {
"burnin": {
"TOP_LEFT": "{yy}-{mm}-{dd}",
"TOP_RIGHT": "v{version:0>3}",
"BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}",
"BOTTOM_LEFT": "{username}"
},
"burnin_ftrack": {
"filter": {
"families": ["ftrack"]
},
"BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}",
"BOTTOM_LEFT": "{username}"
},
"burnin_v2": {
"options": {
"opacity": 0.5
},
"TOP_LEFT": "{yy}-{mm}-{dd}",
"TOP_RIGHT": "v{version:0>3}"
}
}
}, {
"families": ["rendersetup"],
"burnins": {
"burnin": {
"TOP_LEFT": "{yy}-{mm}-{dd}",
"BOTTOM_LEFT": "{username}"
}
}
}, {
"tasks": ["animation"],
"burnins": {
"burnin": {
"TOP_RIGHT": "v{version:0>3}",
"BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}"
}
}
}]
}
}
```
### `ProcessSubmittedJobOnFarm`
```json
{
"ProcessSubmittedJobOnFarm": {
"aov_filter": {
"host": ["aov_name"],
"maya": ["beauty"]
},
"deadline_pool": ""
}
}
```
## Maya
### load.json
### `colors`
Maya outliner colors for various families.
```python
"colors": {
"model": [0.821, 0.518, 0.117],
"rig": [0.144, 0.443, 0.463],
"pointcache": [0.368, 0.821, 0.117],
"animation": [0.368, 0.821, 0.117],
"ass": [1.0, 0.332, 0.312],
"camera": [0.447, 0.312, 1.0],
"fbx": [1.0, 0.931, 0.312],
"mayaAscii": [0.312, 1.0, 0.747],
"setdress": [0.312, 1.0, 0.747],
"layout": [0.312, 1.0, 0.747],
"vdbcache": [0.312, 1.0, 0.428],
"vrayproxy": [0.258, 0.95, 0.541],
"yeticache": [0.2, 0.8, 0.3],
"yetiRig": [0, 0.8, 0.5]
}
```
### publish.json
### `ValidateModelName`
```python
"ValidateModelName": {
"enabled": false,
"material_file": "/path/to/shader_name_definition.txt",
"regex": "(.*)_(\\d)*_(?P<shader>.*)_(GEO)"
},
```
### `ValidateShaderName`
```python
"ValidateShaderName": {
"enabled": false,
"regex": "(?P<asset>.*)_(.*)_SHD"
}
```
## Nuke
### create.json
### `CreateWriteRender`
```python
"CreateWriteRender": {
"fpath_template": "{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}"
}
```
### publish.json
### `ExtractThumbnail`
The plugin responsible for generating thumbnails, with the colorspace handled by Nuke. A Reformat node secures proper framing within the default workfile screen space.
```json
{
"nodes": {
"Reformat": [
["type", "to format"],
["format", "HD_1080"],
["filter", "Lanczos6"],
["black_outside", true],
["pbb", false]
]
}
}
```
### `ExtractReviewDataMov`
Setting `viewer_lut_raw` to **true** publishes the baked mov file without any colorspace conversion; it will be baked with the workfile working space. This is useful in case the Viewer input process uses baked screen-space LUTs.
#### baking with controlled colorspace
Some productions might be using custom OCIO config files, either for the whole project, a sequence, or even individual shots. In that case we can use **display roles** to let compositors use their preferred viewer space, but also make sure that baking of outputs happens in a defined space for client reviews.
`bake_colorspace_fallback` will be used if, for some reason, the space defined in `bake_colorspace_main` (here `shot_grade_rec709`) is not found in the shot's _config.ocio_.
> Be aware this will only work if `viewer_lut_raw` is set to _false_.
```json
{
"viewer_lut_raw": false,
"bake_colorspace_fallback": "show_lut_rec709",
"bake_colorspace_main": "shot_grade_rec709"
}
```
## NukeStudio
### Publish.json
Destination of the following example code:
[`presets/plugins/nukestudio/publish.json`](https://github.com/pypeclub/pype-config/blob/develop/presets/plugins/nukestudio/publish.json)
### `CollectInstanceVersion`
Activate this plugin if you want your published plates to always have the same version as the Hiero project they were published from. If this plugin is off, plate versioning automatically finds the next available version in the database.
```json
{
"CollectInstanceVersion": {
"enabled": true
}
}
```
### `ExtractReviewCutUpVideo`
An example of a tag which could be added into the plugin preset.
In this case we might have 4K plates, but we would like to publish all review files reformatted to 2K.
[Details of available tags](#preset-attributes)
```json
{
"ExtractReviewCutUpVideo": {
"tags_addition": ["reformat"]
}
}
```
## Standalone Publisher
Documentation yet to come.
---
id: admin_presets_tools
title: Presets > Tools
sidebar_label: Tools
---
## Colorspace
We provide two examples of possible settings for Nuke, but these can vary wildly between clients and projects.
### `Default` [dict]
path: `pype-config/presets/colorspace/default.json`
```python
"nuke": {
"root": {
"colorManagement": "Nuke",
"OCIO_config": "nuke-default",
"defaultViewerLUT": "Nuke Root LUTs",
"monitorLut": "sRGB",
"int8Lut": "sRGB",
"int16Lut": "sRGB",
"logLut": "Cineon",
"floatLut": "linear"
},
"viewer": {
"viewerProcess": "sRGB"
},
"write": {
"render": {
"colorspace": "linear"
},
"prerender": {
"colorspace": "linear"
},
"still": {
"colorspace": "sRGB"
}
}
},
```
### `aces103-cg` [dict]
path: `pype-config/presets/colorspace/aces103-cg.json`
```python
"nuke": {
"root": {
"colorManagement": "OCIO",
"OCIO_config": "aces_1.0.3",
"workingSpaceLUT": "ACES - ACEScg",
"defaultViewerLUT": "OCIO LUTs",
"monitorLut": "ACES/sRGB D60 sim.",
"int8Lut": "Utility - sRGB - Texture",
"int16Lut": "Utility - sRGB - Texture",
"logLut": "Input - ARRI - V3 LogC (EI800) - Wide Gamut",
"floatLut": "ACES - ACES2065-1"
},
"viewer": {
"viewerProcess": "sRGB D60 sim. (ACES)"
},
"write": {
"render": {
"colorspace": "ACES - ACEScg"
},
"prerender": {
"colorspace": "ACES - ACEScg"
},
"still": {
"colorspace": "Utility - Curve - sRGB"
}
}
},
```
## Creator Defaults
path: `pype-config/presets/tools/creator.json`
This preset tells the creator tool which family should be pre-selected for different tasks. Keep in mind that the task is matched loosely, so for example any task with 'model' in its name will be considered a modelling task for these purposes.
`"Family name": ["list", "of", "tasks"]`
```python
"Model": ["model"],
"Render Globals": ["light", "render"],
"Layout": ["layout"],
"Set Dress": ["setdress"],
"Look": ["look"],
"Rig": ["rigging"]
```
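The loose matching could be sketched as follows (illustrative; the actual tool's logic may differ):

```python
# Subset of the creator.json mapping shown above.
CREATOR_DEFAULTS = {
    "Model": ["model"],
    "Render Globals": ["light", "render"],
    "Rig": ["rigging"],
}


def default_family(task_name):
    """Return the family whose task keyword appears in the task name."""
    task_name = task_name.lower()
    for family, keywords in CREATOR_DEFAULTS.items():
        if any(keyword in task_name for keyword in keywords):
            return family
    return None
```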
## Project Folder Structure
path: `pype-config/presets/tools/project_folder_structure.json`
Defines the base folder structure for a project. This is supposed to act as a starting point to quickly create the base of the project. You can add `[ftrack.entityType]` after any of the folders here and they will automatically also be created in the ftrack project.
### `__project_root__` [dict]
```python
"__project_root__": {
"_prod" : {},
"_resources" : {
"footage": {
"ingest": {},
"offline": {}
},
"audio": {},
"art_dept": {},
},
"editorial" : {},
"assets[ftrack.Library]": {
"characters[ftrack]": {},
"locations[ftrack]": {}
},
"shots[ftrack.Sequence]": {
"editorial[ftrack.Folder]": {}
}
}
```
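The `[ftrack.entityType]` suffix could be parsed roughly like this when walking the structure (a sketch, not the deployment code):

```python
import re

# Folder key, optionally followed by "[ftrack]" or "[ftrack.EntityType]".
SUFFIX_RE = re.compile(r"^(?P<name>[^\[]+)(?:\[ftrack(?:\.(?P<type>\w+))?\])?$")


def parse_folder(key):
    """Split a folder key into (folder name, ftrack entity type or None)."""
    match = SUFFIX_RE.match(key)
    return match.group("name"), match.group("type")
```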
## Software Folders
path: `pype-config/presets/tools/sw_folders.json`
Defines extra folders to be created inside the work space when a particular task type is launched. Mostly used for configs that use the {app} key in their work template and want to add hosts that are not supported yet.
```python
"compositing": ["nuke", "ae"],
"modeling": ["maya", "app2"],
"lookdev": ["substance"],
"animation": [],
"lighting": [],
"rigging": []
```
## Tray Items
path: `pype-config/presets/tray/menu_items.json`
This preset lets admins turn different pype modules on and off from the tray menu, which in turn makes them unavailable across the pipeline.
### `item_usage` [dict]
```python
"item_usage": {
"User settings": false,
"Ftrack": true,
"Muster": false,
"Avalon": true,
"Clockify": false,
"Standalone Publish": true,
"Logging": true,
"Idle Manager": true,
"Timers Manager": true,
"Rest Api": true
},
```
## Muster Templates
path: `pype-config/presets/muster/templates_mapping.json`
Muster template mapping maps a Muster template ID to the name of a renderer. Initially it is set to Muster defaults. For more about templates and Muster, see the Muster documentation.
Keys are renderer names and values are template IDs.
```python
"3delight": 41,
"arnold": 46,
"arnold_sf": 57,
"gelato": 30,
"harware": 3,
"krakatoa": 51,
"file_layers": 7,
"mentalray": 2,
"mentalray_sf": 6,
"redshift": 55,
"renderman": 29,
"software": 1,
"software_sf": 5,
"turtle": 10,
"vector": 4,
"vray": 37,
"ffmpeg": 48
```
---
id: admin_pype_commands
title: Pype Commands Reference
sidebar_label: Pype Commands
---
## Help
To get all available commands:
```sh
pype --help
```
To get help on a particular command:
```sh
pype <command> --help
```
--------------------
## `clean`
Command to clean Python bytecode files from Pype and its environment. Useful
for developers after a code or environment update.
--------------------
## `coverage`
Generate a code coverage report.
### `--pype`
- without this option, tests are run on *pype-setup* only.
```sh
pype coverage --pype
```
--------------------
## `deploy`
To deploy Pype:
```sh
pype deploy
```
### `--force`
To force re-deploy:
```sh
pype deploy --force
```
---------------------------
## `download`
To download required dependencies:
```sh
pype download
```
--------------------
## `eventserver`
This command launches the ftrack event server.
This should ideally be run as a system service (such as systemd or upstart
on Linux, or a Windows service).
You have to either set the proper environment variables to provide the URL and
credentials, or use the options to specify them. If you use `--store_credentials`,
the provided credentials will be stored for later use.
To run ftrack event server:
```sh
pype eventserver --ftrack-url=<url> --ftrack-user=<user> --ftrack-api-key=<key> --ftrack-events-path=<path> --no-stored-credentials --store-credentials
```
### `--debug`
- print debug info
### `--ftrack-url`
- URL to ftrack server
### `--ftrack-user`
- user name to log in to ftrack
### `--ftrack-api-key`
- ftrack api key
### `--ftrack-events-path`
- path to event server plugins
### `--no-stored-credentials`
- will use credentials specified with the options above
### `--store-credentials`
- will store credentials to file for later use
--------------------
## `install`
To install Pype:
```sh
pype install
```
### `--force`
To reinstall Pype:
```sh
pype install --force
```
### `--offline`
To install Pype in offline mode:
```sh
pype install --offline
```
To reinstall Pype in offline mode:
```sh
pype install --offline --force
```
--------------------
## `launch`
Launch application in Pype environment.
### `--app`
Application name - this should be the same as its [defining toml](admin_hosts#launchers) file (without .toml)
### `--project`
Project name
### `--asset`
Asset name
### `--task`
Task name
### `--tools`
*Optional: Additional tools environment files to add*
### `--user`
*Optional: User on behalf to run*
### `--ftrack-server` / `-fs`
*Optional: Ftrack server URL*
### `--ftrack-user` / `-fu`
*Optional: Ftrack user*
### `--ftrack-key` / `-fk`
*Optional: Ftrack API key*
For example to run Python interactive console in Pype context:
```sh
pype launch --app python --project my_project --asset my_asset --task my_task
```
--------------------
## `make_docs`
Generate API documentation into `docs/build`
```sh
pype make_docs
```
--------------------
## `mongodb`
To run a testing MongoDB database (requires MongoDB installed on the workstation):
```sh
pype mongodb
```
--------------------
## `publish`
Pype takes the JSON file from the provided path and uses it to publish the data in it.
```sh
pype publish <PATH_TO_JSON>
```
### `--gui`
- run Pyblish GUI
### `--debug`
- print more verbose information
--------------------
## `test`
Run the test suite on Pype:
### `--pype`
- without this option, tests are run on *pype-setup* only.
```sh
pype test --pype
```
:::note Pytest
For more information about testing see [Pytest documentation](https://docs.pytest.org/en/latest/)
:::
--------------------
## `texturecopy`
Copy specified textures to the provided asset path.
It validates that the project and asset exist. Then it copies all textures
found in all directories under `--path` to the destination folder, determined
by the texture template in **anatomy**. It will use the source filename and
automatically raise the version number on the directory.
The result is copied without directory structure, so it will be flat.
Nothing is written to the database.
Nothing is written to database.
### `--project`
### `--asset`
### `--path`
```sh
pype texturecopy --project <PROJECT_NAME> --asset <ASSET_NAME> --path <PATH_TO_JSON>
```
--------------------
## `tray`
To launch Tray:
```sh
pype tray
```
### `--debug`
To launch Tray with debugging information:
```sh
pype tray --debug
```
--------------------
## `update-requirements`
Synchronize the dependencies in your virtual environment with the requirements.txt file.
Equivalent of running `pip freeze > pypeapp/requirements.txt` from your virtual
environment. This is useful for development purposes.
```sh
pype update-requirements
```
--------------------
## `validate`
To validate deployment:
```sh
pype validate
```
--------------------
## `validate-config`
To validate JSON configuration files for syntax errors:
```sh
pype validate-config
```
---
id: admin_setup_troubleshooting
title: Setup Troubleshooting
sidebar_label: Setup Troubleshooting
---
## SSL Server certificates
Python is strict about certificates when connecting to a server with SSL. If a
certificate cannot be validated, the connection will fail. Therefore care must be
taken when using self-signed certificates to add their certificate authority
to the trusted certificates.
Also please note that even when using certificates from a trusted CA, you need to
keep your trusted CA certificate bundle up to date, as those certificates can change.
So if you receive an SSL error such as `cannot validate certificate`, please update the root CA certificate bundle on your machines and possibly the **certifi** python package in the Pype virtual environment - just edit `pypeapp/requirements.txt` and update its version. You can find current versions on [PyPI](https://pypi.org).
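For a quick diagnostic from Python, you can build an SSL context that validates against an explicitly chosen CA bundle - a sketch using only the standard library; the bundle path is a placeholder for your studio's root bundle:

```python
import ssl


def make_context(ca_bundle_path=None):
    """Return an SSL context validating against the given CA bundle.

    Falls back to the system trust store when no bundle path is given.
    """
    if ca_bundle_path:
        # e.g. "custom_ca.pem" exported by your IT department.
        return ssl.create_default_context(cafile=ca_bundle_path)
    return ssl.create_default_context()
```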
---
id: api
title: Pype API
sidebar_label: API
---
Work in progress
---
id: artist_concepts
title: Key concepts
sidebar_label: Key Concepts
---
## Glossary
### Asset
In our pipeline, all the main entities a project is made from are internally considered *'Assets'*: episode, sequence, shot, character, prop, etc. All of these behave identically in the pipeline. Asset names need to be absolutely unique within the project because they are its key identifier.
### Subset
Usually, an asset needs to be created in multiple *'flavours'*. A character might have multiple different looks, a model needs to be published in different resolutions, a standard animation rig might not be usable in a crowd system, and so on. *'Subsets'* are here to accommodate all this variety that might be needed within a single asset. A model might have the subsets *'main'*, *'proxy'*, *'sculpt'*, while data of the *'look'* family could have subsets *'main'*, *'dirty'*, *'damaged'*. Subsets have some recommendations for their names, but ultimately it's up to the artist to use them to separate publishes when needed.
### Version
A numbered iteration of a given subset. Each version contains at least one [representation][daa74ebf].
[daa74ebf]: #representation "representation"
### Representation
Each published variant can come out of the software in multiple representations. All of them hold exactly the same data, but in different formats. A model, for example, might be saved as `.OBJ`, Alembic, Maya geometry, or as all of them, ready for pickup in any other application supporting these formats.
### Family
Each published [subset][3b89d8e0] can have exactly one family assigned to it. The family determines the type of data the subset holds. A family doesn't dictate the file type, but can enforce certain technical specifications. For example, the Pype default configuration expects the `model` family to only contain geometry without any shaders or joints when it is published.
[3b89d8e0]: #subset "subset"
### Host
A general term for software or an application supported by Pype and Avalon. These are usually DCC applications like Maya, Houdini or Nuke, but can also be a web-based service like Ftrack or Clockify.
### Tool
A small piece of software, usually dedicated to a particular purpose. Most Pype and Avalon tools have a GUI, but some are command line only.
### Publish
The process of exporting data from your work scene to a versioned, immutable file that can be used by other artists in the studio.
### Load
The process of importing previously published subsets into your current scene, using any of the pype tools.
Loading assets using the proper tools ensures that all your scene content stays version controlled and updatable at a later point.
---
id: artist_ftrack
title: Ftrack
sidebar_label: Artist
---
# How to use Ftrack in Pype
## Login to Ftrack module in Pype tray (best case scenario)
1. Launch Pype tray if not launched yet
2. *Ftrack login* window pop up on start
- or press **login** in **Ftrack menu** to pop up *Ftrack login* window
![ftrack-login-2](assets/ftrack/ftrack-login_50.png)
3. Press `Ftrack` button
![Login widget](assets/ftrack/ftrack-login_1.png)
4. Web browser opens
5. Sign in to Ftrack if requested
![ftrack-login-2](assets/ftrack/ftrack-login_2.png)
6. A message is shown
![ftrack-login-3](assets/ftrack/ftrack-login_3.png)
7. Close message and you're ready to use actions - continue with [Application launch](#application-launch-best-case-scenario)
---
## Application launch (best case scenario)
1. Make sure Pype tray is running and that you have completed the [Login to Ftrack](#login-to-ftrack-module-in-pype-tray-best-case-scenario) guide
2. Open web browser and go to your studio Ftrack web page *(e.g. https://mystudio.ftrackapp.com/)*
3. Locate the task on which you want to run the application
4. Display actions for the task
![ftrack-login-3](assets/ftrack/ftrack-login_60.png)
5. Select application you want to launch
   - application versions may be grouped into one action; in that case, press the action to reveal the versions to choose from *(like Maya in the picture)*
![ftrack-login-3](assets/ftrack/ftrack-login_71-small.png)
6. Work
---
## Change Ftrack user
1. Log out the previous user from the Ftrack Web app *(skip if the new user is already logged in)*
![ftrack-login-3](assets/ftrack/ftrack-login_80-small.png)
2. Log out the previous user from Ftrack module in tray
![ftrack-login-3](assets/ftrack/ftrack-login_81.png)
3. Follow [Login to Ftrack](#login-to-ftrack-module-in-pype-tray-best-case-scenario) guide
---
## What if...
### Ftrack login window didn't pop up and Ftrack menu is not in tray
**1. possibility - Pype tray didn't load properly**
- try to restart tray
**2. possibility - Ftrack is not set in Pype**
- inform your administrator
### Web browser did not open
**1. possibility - button was not pressed**
- Try pressing the `Ftrack` button in the *Ftrack login* window again
**2. possibility - Ftrack URL is not set or is not right**
- Check the **Ftrack URL** value in the *Ftrack login* window
- Inform your administrator if the URL is incorrect, and launch tray again once the administrator fixes it
**3. possibility - Ftrack Web app can't be reached the way Pype uses it**
- Enter your **Username** and [API key](#where-to-find-api-key) in the *Ftrack login* window and press the **Login** button
### Ftrack action menu is empty
**1. possibility - Pype tray is not running**
- launch Pype tray
**2. possibility - You didn't go through Login to Ftrack guide**
- please go through [Login to Ftrack](#login-to-ftrack-module-in-pype-tray-best-case-scenario) guide
**3. possibility - The user logged in to Ftrack Web is not the same as the user logged in to the Ftrack module in tray**
- Follow the [Change Ftrack user](#change-ftrack-user) guide
**4. possibility - The project doesn't have applications set**
- ask your Project Manager to check that applications are set for the project

@ -0,0 +1,43 @@
---
title: Getting started with Pype
sidebar_label: Getting started
---
## Basic use
If you have Pype installed and deployed, you can start using it. Ideally you should
have a Pype icon on your desktop, or even have your computer set up so Pype starts
automatically.
For the most common tasks there are so-called *launchers* - scripts you can simply run from a
desktop shortcut. There is also manual invocation of the Pype command, which gives you
slightly more control.
:::tip Launchers
Launchers can be found in `pype/launchers` directory. They are basically shell scripts running Pype. You can create shortcuts on desktop for them for easy Pype launching.
:::
### Starting tray manually
**Pype Tray** is the most common Pype command for artists. It runs the Pype GUI in the system tray,
from which you can work with Pype. To use Pype, **Pype Tray** must be running.
To run **Pype Tray**:
```sh
pype tray
```
or run the launcher `launchers/pype_tray.bat` (Windows) or `launchers/pype_tray.sh` (Linux)
:::note Debugging
To get more information on what's going on in Pype, you can run Tray with `--debug` option. This will show text console window with lots of useful information.
```sh
pype tray --debug
```
:::
### Advanced use
For more advanced use of Pype command please visit [Admin section](admin_pype_commands).

@ -0,0 +1,17 @@
---
id: artist_hosts
title: Hosts
sidebar_label: Hosts
---
## Maya
## Houdini
## Nuke
## Fusion
## Unreal
## System

@ -0,0 +1,128 @@
---
id: artist_hosts_harmony
title: Harmony
sidebar_label: Harmony
---
## Available Tools
- [Work Files](artist_tools.md#workfiles)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Publish](artist_tools.md#publisher)
- [Manage](artist_tools.md#inventory)
:::note
Only one tool can be open at a time. If you open a tool while another tool is open, it will wait in a queue until the existing tool is closed. Once the existing tool is closed, the new tool will open.
:::
## Usage
The integration creates an `Avalon` menu entry where all related tools are located.
:::note
Menu creation can be temperamental. It's best to start Harmony and do nothing else until the application has fully launched.
If you don't see the `Avalon` menu, follow these steps to create it:
- Go to the Script Editor
- Find the script called `TB_sceneOpened.js` and run it.
- Choose the `start` method to run.
:::
### Workfiles
`Avalon > Workfiles`
Work files are temporarily stored locally, in `[user]/.avalon/harmony`, to reduce network bandwidth. When saving the Harmony scene, a background process ensures the network files are updated.
:::important
Because the saving to the network location happens in the background, be careful when quickly saving and closing Harmony (and the terminal window) since an interrupted saving to the network location can corrupt the workfile. To be sure the workfile is saved to the network location look in the terminal for a line similar to this:
`DEBUG:avalon.harmony.lib:Saved "[Local Scene Directory]" to "[Network Scene Directory]\[Name Of Workfile].zip"`
:::
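The local staging path mentioned above can be computed like this (a minimal sketch based on the path pattern in this note; the exact subfolder layout inside that directory is an internal detail of the integration):

```python
import os

def harmony_local_dir():
    """Local directory where Harmony work files are staged, per the note above."""
    return os.path.join(os.path.expanduser("~"), ".avalon", "harmony")

print(harmony_local_dir())
```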
### Create
`Avalon > Create`
![Creator](assets/harmony_creator.PNG)
These are the families supported in Harmony:
- `Render`
- This instance is for generating a render and review. This is a normal write node, but only PNGs are supported at the moment.
- `Template`
  - This instance is for generating templates. This is a normal composite node, to which you can connect any number of nodes.
- Any connected nodes will be published along with their dependencies and any back drops.
- (`Palette`)
- Palettes are indirectly supported in Harmony. This means you just have to have palettes in your scene to publish them.
When you check `Use selection` on creation, the last selected node will be connected to the created node.
### Publish
`Avalon > Publish`
![Publish](assets/photoshop_publish.PNG)
This tool will run through checks to make sure the contents you are publishing are correct. Hit the "Play" button to start publishing.
You may encounter publishing issues, indicated by red squares. If these issues are within the validation section, you can fix them. If there are issues outside of the validation section, please let the Pype team know.
#### Repair Validation Issues
All validators give some description of what the issue is. You can inspect it by going into the validator through the arrow:
![Inspect](assets/photoshop_publish_inspect.PNG)
You can expand the errors by clicking on them for more details:
![Expand](assets/photoshop_publish_expand.PNG)
Some validators have repair actions which will fix the issue. You can identify validators with actions by the circle icon with an "A":
![Actions](assets/photoshop_publish_actions.PNG)
To access the actions, right-click on the validator. If an action runs successfully, the action's icon will turn green. Once all issues are fixed, just hit the "Refresh" button and try to publish again.
![Repair](assets/photoshop_publish_repair.gif)
### Load
`Avalon > Load`
![Loader](assets/photoshop_loader.PNG)
The supported families for Harmony are:
- `image`
- `harmony.template`
  - Only import is currently supported for templates.
- `harmony.palette`
  - Loaded palettes are moved to the top of the colour stack, so they act as overrides. Imported palettes are left in the scene.
- `workfile`
- Only of type `zip`.
To load, right-click on the subset you want and choose a representation:
![Loader](assets/photoshop_loader_load.gif)
:::note
Loading templates or workfiles will import their contents into the scene. Referencing is not supported at the moment, so you will have to load newer versions into the scene.
:::
### Manage
`Avalon > Manage`
![Loader](assets/photoshop_manage.PNG)
You can switch to a previous version of the image or update to the latest.
![Loader](assets/photoshop_manage_switch.gif)
![Loader](assets/photoshop_manage_update.gif)
:::note
Images and image sequences are loaded into the scene as read nodes and coloured green. On startup, the pipeline checks for any outdated read nodes and colours them red.
- <span style={{color:'green'}}>**Green**</span> = Up to date version in scene.
- <span style={{color:'red'}}>**Red**</span> = Outdated version in scene.
:::

@ -0,0 +1,693 @@
---
id: artist_hosts_maya
title: Maya
sidebar_label: Maya
---
## Pype global tools
- [Set Context](artist_tools.md#set-context)
- [Work Files](artist_tools.md#workfiles)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Manage (Inventory)](artist_tools.md#inventory)
- [Publish](artist_tools.md#publisher)
- [Library Loader](artist_tools.md#library-loader)
## Working with Pype in Maya
Pype is here to ease the burden of working on a project with lots of
collaborators: worrying about naming, settings, browsing through endless
directories, loading and exporting, and so on. To achieve that, Pype uses the
concept of being _"data driven"_. This means that what happens during publishing
is influenced by data in the scene. This can be slightly confusing, so let's go
through it with a few examples.
## Publishing models
### Intro
Publishing models in Maya is pretty straightforward. Create your model as you
need. You have to adhere to the specifications of your studio, which can differ
between studios and projects, but by default your geometry has to be named properly.
For example `sphere_GEO` or `cube1_GEO`. Geometry needs to have frozen transformations
and must reside under one group, for example `model_GRP`.
![Model example](assets/maya-model_hierarchy_example.jpg)
Note that `sphere_GEO` has frozen transformations.
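Naming rules like this are typically enforced by a publish validator. A minimal, illustrative sketch of such a check (the `_GEO` suffix is taken from the examples above and is studio-specific, not a fixed Pype rule):

```python
import re

# Studio-specific convention from the examples above: geometry names end with _GEO.
GEO_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9]*_GEO$")

def invalid_geo_names(node_names):
    """Return node names that break the _GEO naming convention."""
    return [name for name in node_names if not GEO_NAME.match(name)]

print(invalid_geo_names(["sphere_GEO", "cube1_GEO", "pCube3"]))  # -> ['pCube3']
```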
### Creating instance
Now create a **Model instance** from it to let Pype know what in the scene you want to
publish. Go to **Pype → Create... → Model**
![Model create instance](assets/maya-model_create_instance.jpg)
The `Asset` field is the name of the asset you are working on - it should already be filled
with the correct name, as you started Maya (or switched context) on a specific asset. You
can edit the field to change it to a different asset (but that asset must already exist).
The `Subset` field is a name you can decide on. It should describe what kind of data
the subset holds. For example, you can name it `Proxy` to indicate that this is a
low-resolution version. See [Subset](artist_concepts#subset).
:::note LOD support
By changing the subset name you can take advantage of _LOD support_ in Pype. Your
asset can contain various resolutions defined by different subsets. You can then
switch between them very easily using [Inventory (Manage)](artist_tools#inventory),
where LODs are conveniently grouped so they don't clutter the Inventory view.
Name your subset like `main_LOD1`. The important part is the `_LOD1` suffix. You can have as many LODs as you need.
:::
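The `_LOD` suffix convention can be illustrated with a small parse that splits a subset name into its base and LOD index (purely illustrative; the actual grouping happens inside the Inventory tool):

```python
def split_lod(subset_name):
    """Split 'main_LOD1' into ('main', 1); names without the suffix get (name, None)."""
    base, sep, lod = subset_name.rpartition("_LOD")
    if sep and lod.isdigit():
        return base, int(lod)
    return subset_name, None

print(split_lod("main_LOD1"))  # -> ('main', 1)
print(split_lod("modelMain"))  # -> ('modelMain', None)
```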
The read-only field just under it shows the final subset name, which adds the subset
field to the name of the group you have selected.
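Based on the example names in this guide (`model` family plus subset `Main` gives `modelMain`), the final name is roughly the family followed by the capitalized subset. A hypothetical sketch (the real logic lives in Pype's creator plugins):

```python
def final_subset_name(family, subset):
    """Compose e.g. ('model', 'Main') -> 'modelMain' (capitalization assumed)."""
    return family + subset[:1].upper() + subset[1:]

print(final_subset_name("model", "Main"))  # -> modelMain
print(final_subset_name("look", "main"))   # -> lookMain
```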
The `Use selection` checkbox will wrap whatever you have selected in the Outliner into
the Model instance. This is usually what you want. Click the **Create** button.
You'll notice that after you've created the new Model instance, there is a new set
in the Outliner named after your subset - in our case, `modelMain`.
And that's it, you have your first model ready to publish.
Now save your scene (if you haven't done so already). You will notice that the path
in the Save dialog is already set to the place where scenes related to the modeling task on
your asset should reside. As in our case we are working on an asset called
**Ben** and on the task **modeling**, the path relative to your project directory will be
`project_XY/assets/ben/work/modeling`. Let's save our scene as `model_test_v01`.
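The work path above follows a simple template. A hedged sketch of how such a path is composed (in practice the template is configurable per project in Pype's project anatomy):

```python
import os

def work_dir(project_root, asset, task):
    """Compose the work directory from the example: project/assets/<asset>/work/<task>."""
    return os.path.join(project_root, "assets", asset, "work", task)

print(work_dir("project_XY", "ben", "modeling"))
```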
### Publishing models
Now let's publish it. Go to **Pype → Publish...**. You will be presented with the following window:
![Model publish](assets/maya-model_pre_publish.jpg)
Note that the content of this window can differ depending on your pipeline configuration.
For more detail see [Publisher](artist_tools#publisher).
Items in the left column are instances you will be publishing. You can disable them
by clicking on the square next to them. A green square indicates they are ready for
publishing, red means something went wrong during either the collection phase
or the publishing phase. An empty square with gray text is disabled.
You can see that in this case we are publishing, from the scene file `model_test_v01.mb`, a
model named `modelMain (ben)` (next item). Publishing of the workfile is
currently disabled (last item).
The right column lists all plugins that run during the collection, validation,
extraction and integration phases. White items are optional and you can disable
them by clicking on them.
Let's do a dry run on publishing to see if we pass all the validators. Click on the flask
icon at the bottom and the validators are run. Ideally you will end up with everything
green in the validator section.
### Fixing problems
To make things interesting, I intentionally forgot to freeze the transformations
on `sphere_GEO`, as I know it will trigger the validator designed to check just this.
![Failed Model Validator](assets/maya-model_publish_error.jpg)
You can see our model is now marked red in the left column, and on the right we have a
red box next to the `Transform Zero (Freeze)` validator.
You can click on the arrow next to it to see more details:
![Failed Model Validator details](assets/maya-model_freeze_error_details.jpg)
From there you can see in the **Records** entry that there is a problem with `sphere_GEO`.
Some validators have an option to fix the problem for you, or to just select the objects that
cause trouble. This is the case with our failed validator.
In the main overview you can notice a little up arrow in a circle next to the validator
name. Right-click on it and you can see the menu item `select invalid`. This
will select the offending object in Maya.
The fix is easy. Without closing the Publisher window, we just freeze the transformations.
Then we need to reset the Publisher to make it notice the changes we've made. Click on the
circle-arrow button at the bottom and it will reset the Publisher to its initial state. Run the
validators again (flask icon) to see if everything is OK.
It should be now. Write a comment if you want and click the play icon button
when ready.
The publish process will now take its course. Depending on the data you are publishing,
it can take a while. You should end up with everything green and the message
**Finished successfully ...**. You can now close the Publisher window.
To check for yourself that the model is published, open the
[Asset Loader](artist_tools#loader) - **Pype → Load...**.
There you should see your model, named `modelMain`.
## Look development
Look development in Pype is easy. It helps you with versioning different
kinds of shaders and easily switching between them.
Let's see how it works.
### Loading model
In this example I have already published a model of Buddha. To see how to publish a
model with Pype, see [Publishing models](artist_hosts_maya#publishing-models).
First off, let's start with an empty scene. Now go to **Pype → Load...**
![Model loading](assets/maya-model_loading.jpg)
Here I am loading `modelBuddha`, version 1, for the asset **foo**. Just right-click
on it and select **Reference (abc)**. This will load the model into the scene as an alembic.
Now you can close the Loader window.
### Creating look
Now you can create whatever look you want: assign shaders, textures, etc. to the model.
In my case, I assigned a simple Arnold _aiSurfaceShader_ and changed its color to red.
![Look Dev - Red Buddha](assets/maya-look_dev-red_buddha.jpg)
I am quite happy with it, so I want to publish it as my first look.
### Publishing look
Select your model in the Outliner and go to **Pype → Create...**. From there,
select **Look**. Make sure the `Use selection` checkbox is checked.
My subset name is `Main`. This will create a _Look instance_ with the name **lookMain**.
Close the _Creator_ window.
Now save your scene and give it some sensible name. Next, go to **Pype → Publish**.
This process is almost identical to publishing models, only different _Validators_
and other plugins will be used.
It should be painless and cause no trouble, so go ahead: click the play icon button at
the bottom and it will publish your look.
:::note publishing multiple looks
You can reference the same model into the scene multiple times and change the materials on
every instance as you need. Then create a _Look instance_ on every model. When
publishing, all those _Look instances_ will be published at the same time.
:::
### Loading looks into models
Now let's see how looks are applied. Start a new empty scene and load your published
model there as before (using _Reference (abc)_). If you didn't notice until now,
there are a few yellow icons in the left shelf:
![Maya - shortcut icons](assets/maya-shortcut_buttons.jpg)
Those are shortcuts for **Look Manager**, [Work Files](artist_tools.md#workfiles),
[Load](artist_tools.md#loader), and [Manage (Inventory)](artist_tools.md#inventory).
They can also be found in the top menu, but that depends on your studio setup.
You are now interested in **Look Manager** - the first item, with the brush icon. Select
your Buddha model and open **Look Manager**.
![Maya - Look Manager](assets/maya-look_dev-look_manager.jpg)
This is the **Look Manager** window. Yours will be empty until you click **Get All Assets**
or **Get Assets From Selection**. You can use the latter to quickly assign looks if you have
multiple assets loaded in the scene. Click on one of those buttons now.
You should now see all assets and their subsets loaded in scene, and on right side
all applicable published looks.
Select your asset and on the right side right-click on the `Main` look. Apply it.
You'll notice that the Buddha model is now red; the materials you've published are now applied
to it.
That way you can create looks as you want and version them using Pype.
## Setting scene data
Maya settings concerning framerate, resolution and frame range are handled by
Pype. If these are set correctly in Ftrack, Maya will validate that you have the correct fps
on scene save and publish, offering a way to fix it for you.
For resolution and frame range, use **Pype → Reset Frame Range** and
**Pype → Reset Resolution**
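Conceptually, the fps validation compares the scene framerate with the value set in Ftrack; a simplified sketch (the tolerance value is an assumption, not Pype's actual threshold):

```python
def fps_matches(scene_fps, project_fps, tolerance=0.01):
    """Return True when the scene framerate matches the project framerate."""
    return abs(scene_fps - project_fps) <= tolerance

print(fps_matches(25.0, 25.0))    # -> True
print(fps_matches(24.0, 23.976))  # -> False
```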
## Creating rigs with Pype
Creating and publishing rigs with Pype follows a similar workflow to
other data types. Create your rig and mark parts of your hierarchy in sets to
help Pype validators and extractors check and publish it.
### Preparing rig for publish
When creating rigs, it is recommended (and in fact enforced by validators)
to separate bones or driving objects, their controllers and geometry, so they are
easily managed. Currently Pype doesn't allow publishing a model at the same time as
its rig, so for demonstration purposes I'll first create a simple model for a robotic
arm, made out of simple boxes, and publish it.
![Maya - Simple model for rigging](assets/maya-rig_model_setup.jpg)
For more information about publishing models, see [Publishing models](artist_hosts_maya#publishing-models).
Now let's start with an empty scene. Load your model - **Pype → Load...**, right-click
on it and select **Reference (abc)**.
I've created a few bones and their controllers in two separate
groups - `rig_GRP` and `controls_GRP`. The naming is not important - just adhere to
your naming conventions.
Then I've put everything into an `arm_rig` group.
When you've prepared your hierarchy, it's time to create a *Rig instance* in Pype.
Select your whole rig hierarchy, go to **Pype → Create...** and select **Rig**.
A set is created in your scene to mark the rig parts for export. Notice that it has
two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
and your geometry into `out_SET`. You should end up with something like this:
![Maya - Rig Hierarchy Example](assets/maya-rig_hierarchy_example.jpg)
### Publishing rigs
Publishing a rig is done in the same way as publishing everything else. Save your scene
and go to **Pype → Publish**. When you run validation, you'll most likely run into a
few issues at first. Although the number of them may seem intimidating, you'll
find out they are mostly minor things that are easily fixed.
* **Non Duplicate Instance Members (ID)** - This will most likely fail, because when
creating rigs we usually duplicate a few parts to reuse them. But duplication
also duplicates the ID of the original object, and Pype needs every object to have a
unique ID. This is easily fixed by the **Repair** action next to the validator name: click
on the little up arrow on the right side of the validator name and select **Repair** from the menu.
* **Joints Hidden** - This enforces that joints (bones) are hidden from the user, as the
animator usually doesn't need to see them and they clutter the viewport. A
well-behaved rig should have them hidden. The **Repair** action will help here too.
* **Rig Controllers** - This checks that there are no transforms on unlocked attributes
of controllers. This is needed because the animator should have an easy way to reset the rig
to its default position. It also checks that those attributes don't have any
incoming connections from other parts of the scene, to ensure that the published rig doesn't
have any missing dependencies.
### Loading rigs
You can load a rig with the [Loader](artist_tools.md#loader). Go to **Pype → Load...**,
select your rig, right-click on it and **Reference** it.
## Point caches
Pype uses the Alembic format for point caches. The workflow is very similar to
other data types.
### Creating Point Caches
To create a point cache, just create whatever hierarchy you want and animate it.
Select its root, go to **Pype → Create...** and select **Point Cache**.
After that, publishing will create the corresponding **abc** files.
Example setup:
![Maya - Point Cache Example](assets/maya-pointcache_setup.png)
### Loading Point Caches
Loading a point cache means creating a reference to the **abc** file via **Pype → Load...**.
Example result:
![Maya - Point Cache Example](assets/maya-pointcache_loaded.png)
## Set dressing in Maya
Set dressing is a term for easily populating complex scenes with individual parts.
Pype allows you to version and manage those sets.
### Publishing Set dress / Layout
Working with set dresses is very easy. Just load your assets into the scene with the
[Loader](artist_tools.md#loader) (**Pype → Load...**). Populate your scene as
you wish, translating each piece to fit your needs. When ready, select all the imported
pieces, go to **Pype → Create...** and select **Set Dress** or **Layout**.
This will create a set containing your selection, marking it for publishing.
:::note set dress vs layout
Currently *set dress* and *layout* are functionally identical
:::
Now you can publish it with **Pype → Publish**.
### Loading Set dress / Layout
You can load a set dress / layout using the [Loader](artist_tools.md#loader)
(**Pype → Load...**). Select your layout or set dress, right-click on it and
select **Reference Maya Ascii (ma)**. This will populate your scene with all the
models you've put into the layout.
## Rendering with Pype
Pype in Maya can be used for submitting renders to a render farm and for their
subsequent publishing. Right now Pype supports [AWS Thinkbox Deadline](https://www.awsthinkbox.com/deadline)
and [Virtual Vertex Muster](https://www.vvertex.com/overview/).
* For setting up Muster support see [admin section](admin_config#muster)
* For setting up Deadline support see [here](admin_config#aws-thinkbox-deadline)
:::note Muster login
Muster is now configured so that every user must log in to get authentication support. If Pype finds this token missing or expired, it will ask for credentials again.
:::
### Creating basic render setup
If you want to submit your render to the farm, just follow these simple steps.
#### Preparing scene
Let's start with an empty scene. First I'll pull in my favorite Buddha model:
**Pype → Load...**, select the model and right-click to pop up the context menu. From
there just click on **Reference (abc)**.
Next, I want to be sure that I have the same frame range as is set on the shot I am working
on. To do this, just run **Pype → Reset Frame Range**. This sets the Maya timeline to the same
values as those set on the shot - in *Ftrack*, for example.
I have my time set, so let's create some animation. We'll turn the Buddha model around over
50 frames (the length of my timeline).
Select the model, go to the first frame, key the Y-axis rotation, go to the last frame, enter 360 into
the **Channel Editor** Y rotation, key it, and it's done. If you are not sure how to do this,
you are probably reading the wrong documentation.
Now let's set up lights, a ground and a camera. I am lazy, so I create an Arnold Skydome light:
**Arnold → Lights → Skydome Light**. As a ground, a simple plane will suffice. I set
my perspective view as I like and create a new camera from it (`CTRL+SHIFT+C`), renaming
it from `persp1` to `mainCamera`.
One last thing: I'll assign a basic *aiSurfaceShader* to my Buddha and make a few little
tweaks to it.
#### Prepare scene for submission
Now that we have a simple working scene, we can start preparing it for rendering. Pype fully utilizes
Render Setup layers for this. First of all, we need to create a *Render instance* to tell Pype what
to do with the renders. You can easily render locally or on the render farm without it, but the *Render instance*
is here to mark the render layers you want to publish.
Let's create it. Go to **Pype → Create...** and select **Render** from the list. If you keep
**Use selection** checked, it will use your current render layers (if you have them). Otherwise,
if no render layers are present in the scene, it will create one for you named **Main**, and under it a
default collection with a `*` selector.
Whether you use *Deadline* or *Muster*, Pype will try to connect to the render farm and
fetch the machine pool list.
:::note Muster login
This might fail on *Muster* if you have an expired authentication token. In that case, you'll be presented with a login window. Nothing will be created in the scene until you log in again and run **Render** creation again.
:::
So my scene now looks like this:
![Maya - Render scene Setup](assets/maya-render_setup.jpg)
You can see that it created the `renderingMain` set and under it `LAYER_Main`. This set corresponds to
the **Main** render layer in Render Setup. It was created automatically because I had not created any
render layers in the scene before. If you already have layers and you use **Use selection**, they will
appear here, prefixed with `LAYER_`. These layer sets are created whenever you create a new layer in
Render Setup and are deleted when you delete a layer in Render Setup. However, if you delete a `LAYER_` set,
the layer in Render Setup isn't deleted - it just means it won't be published.
Creating a *Render instance* will also set the image prefix in the render settings to Pype defaults based on the
renderer you use - for example, if you render with Arnold, it is `maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>`.
There are a few settings on the *Render instance* `renderingMain` in the **Attribute Editor**:
![Maya - Render attributes](assets/maya-renderglobals.jpg)
A few options that need explaining:
* `Primary Pool` - a list of pools fetched from the server that you can select from.
* `Suspend publish Job` - the job sent to the farm will not start rendering automatically
but stays in a *waiting* state.
* `Extend Frames` - if checked, new frames are added to the previous render, so you can
extend a previous image sequence.
* `Override Existing Frame` - will overwrite files in the destination if they exist.
* `Priority` - the priority of the job on the farm.
* `Frames Per Task` - the number of frames in each individual task (chunk) making up
one job on the farm.
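`Frames Per Task` determines how many tasks a job is split into; the arithmetic is straightforward (a sketch, not farm-specific code):

```python
import math

def task_count(frame_start, frame_end, frames_per_task):
    """Number of tasks (chunks) the frame range is divided into on the farm."""
    frames = frame_end - frame_start + 1
    return math.ceil(frames / frames_per_task)

print(task_count(1, 50, 10))  # -> 5
print(task_count(1, 55, 10))  # -> 6
```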
Now if you run publish, you'll notice a new item in the right column called
`Render Layers`, and in it our new layer `Main (999_abc_0010) [1-10]`. The first part is the
layer name, the second part `(999_abc_0010)` is the asset name, and the rest is the frame range.
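The label format can be illustrated with a small parser (purely illustrative; the `layer (asset) [start-end]` shape is taken from the example above):

```python
import re

LABEL = re.compile(r"^(?P<layer>\S+) \((?P<asset>[^)]+)\) \[(?P<start>\d+)-(?P<end>\d+)\]$")

def parse_layer_label(label):
    """Parse 'Main (999_abc_0010) [1-10]' into layer, asset and frame range."""
    match = LABEL.match(label)
    return (match.group("layer"), match.group("asset"),
            int(match.group("start")), int(match.group("end")))

print(parse_layer_label("Main (999_abc_0010) [1-10]"))
```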
![Maya - Render Publish errors](assets/maya-render_publish_detail1.jpg)
You can see I already tried to run publish but was stopped by a few errors. Let's go
through them one by one to see what else we need to set up in the scene for a
successful publish.
**No Default Cameras Renderable** is telling me:
```fix
Renderable default cameras found: [u'|persp|perspShape']
```
and so it can be resolved by a simple change in the *Main* layer render settings.
All I have to do is remove the `persp` camera from the render settings and add the correct camera.
This leaves me with only the **Render Settings** error. If I click on it to see the
details, I see it has a problem with animation not being enabled:
```fix
Animation needs to be enabled. Use the same frame for start and end to render single frame
```
Go to **Render Settings**, select your render layer and, in the **Common** tab, change
`Frame/Animation ext` in **File Output** to whatever you want, just not _Single Frame_.
Set the **Frame Range** `Start frame` and `End frame` according to your needs.
If you run into problems with the *image file prefix* - it should be set correctly when
creating the *Render instance*, but you can tweak it. It needs to begin with the `maya/<Scene>` token
to avoid render conflicts between DCCs. It needs to contain `<RenderLayer>` or `<Layer>` (vray) and
`<RenderPass>` or `<Aov>` (vray). If you have more than one renderable camera, add the `<Camera>` token.
A sane default for Arnold, Redshift or Renderman is:
```fix
maya/<RenderLayer>/<RenderLayer>_<RenderPass>
```
and for vray:
```fix
maya/<Layer>/<Layer>
```
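The prefix rules above can be summarized in a small check. This is a hedged sketch covering only the `maya/` start and the layer token; the real validation is done by Pype's render settings validator:

```python
# Layer token required per renderer, following the text above (assumption).
LAYER_TOKEN = {"vray": "<Layer>"}

def prefix_is_valid(prefix, renderer):
    """Check that the prefix starts with 'maya/' and contains the layer token."""
    token = LAYER_TOKEN.get(renderer, "<RenderLayer>")
    return prefix.startswith("maya/") and token in prefix

print(prefix_is_valid("maya/<RenderLayer>/<RenderLayer>_<RenderPass>", "arnold"))  # -> True
print(prefix_is_valid("maya/<Layer>/<Layer>", "vray"))                             # -> True
print(prefix_is_valid("renders/<RenderLayer>", "arnold"))                          # -> False
```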
Running **Pype → Reset Resolution** will set the correct resolution on the camera.
The scene is now ready for submission and should publish without errors.
:::tip what happens when I publish my render scene
When publishing is finished, a job is created on the farm. This job has one more dependent job connected to it.
When the render is finished, this second job kicks in and runs publish again, but this time it publishes the rendered image sequence and creates a quicktime preview movie from it. Only rendered sequences that have the **beauty** AOV get a preview, as it doesn't make sense to create one from, for example, a cryptomatte.
:::
### Attaching render to subset
You can create a render that will be attached to another subset you are publishing, rather than being published on its own. Let's assume you want to render a model turnaround.
In the scene from which you want to publish your model, create a *Render instance*. Prepare your render layer as needed, then drag the
model subset (a Maya set node) under the corresponding `LAYER_` set under the *Render instance*. During publish, this render is submitted to the farm, and
after it is rendered, it is attached to your model subset.
## Render Setups
### Publishing Render Setups
Pype can publish a whole **Render Settings** setup. You can then version it and load it into
any Maya scene. This helps TDs distribute per-asset/shot render settings for Maya.
To publish render settings, go to **Pype → Create...** and select **Render Setup Preset**.
A set named `rendersetup<subset>` will appear in your scene. It has no settings of its own - its presence
in the scene is what triggers publishing of the render settings.
When you publish the scene, the current settings in **Render Settings** will be serialized to a json file.
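The serialization itself is plain json; conceptually it is a dump-and-load round trip (a simplified sketch with a hypothetical settings dictionary, not Pype's actual schema):

```python
import json

# Hypothetical snapshot of a few Render Settings values (illustrative only).
settings = {
    "renderer": "arnold",
    "image_prefix": "maya/<RenderLayer>/<RenderLayer>_<RenderPass>",
}

serialized = json.dumps(settings, indent=2)  # what gets written to the file
restored = json.loads(serialized)            # what loading reads back
print(restored == settings)  # -> True
```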
### Loading Render Setups
In any scene, you can load published render settings with **Pype → Load...**. Select your published
render setup settings, right-click on it and select **Load RenderSetup template**.
This will load and parse the json file and apply all the settings in it to your Render Settings.
:::warning
This will overwrite all the settings you already have.
:::
## Reviews
Pype supports creating a review video for almost any type of data you want to publish.
What we call a review video is actually a _playblast_ or _capture_ (depending on the terminology
you are familiar with) made from a pre-defined camera in the scene. This is very useful
when you want to add a turntable preview to your model, for example. But it can
also be used to generate previews for animation, simulations, and so on.
### Setting scene for review extraction
Let's see how review publishing works on a simple scene. We will publish a model with a
turntable preview video.
I'll be using Stanford University dragon model. Start with empty scene.
Create your model, import it or load from Pype. I'll just import model as OBJ
file.
After we have our model in, we need to set everything up to be able to publish it
as a model - for details see [Publishing models](artist_hosts_maya#publishing-models).
To recap - freeze transforms, rename it to `dragon_GEO` and put it into a group
`dragon_GRP`. Then select this group, go to **Pype → Create...** and choose **Model**.
Now, let's create the camera we need to generate the turntable video. I prefer to animate
the camera itself and not the model, because then all animation keys will be associated with the camera
and not with the model we want to publish.
I've created a camera, named it `reviewCamera` and parented it under a `reviewRotation_LOC`
locator. I set my timeline to 50 frames and keyed the `reviewRotation_LOC` Y axis to 0 on frame
1 and to 360 on frame 50. I've also set the animation curve between those two keys
to linear.
To mark the camera to be used for review, select the camera `reviewCamera`, go to **Pype → Create...**
and choose **Review**.
This will create a set `review<subset>` including the selected camera. You can set a few options
on this set to control review video generation:
* `Active` - on/off state of the review
* `Frame Start` - starting frame of the review
* `Frame End` - end frame of the review
* `Handles` - number of handle frames before and after
* `Step` - frame step (every n-th frame is captured)
* `Fps` - framerate
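As a rough illustration of how these options combine, the frames actually captured are the frame range padded by the handles and thinned by the step (a hypothetical helper, not Pype's actual code):

```python
def review_frames(frame_start, frame_end, handles=0, step=1):
    """Return the list of frames a review capture would cover."""
    first = frame_start - handles
    last = frame_end + handles
    return list(range(first, last + 1, step))

# 50-frame turntable with 5 handle frames on each side, every 2nd frame.
frames = review_frames(1, 50, handles=5, step=2)
print(frames[0], frames[-1], len(frames))  # -4 54 30
```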
This is my scene:
![Maya - Review model setup](assets/maya-model_review_setup.jpg)
_* note that I had to fix UVs and normals on the Stanford dragon model, as it wouldn't pass
the model validators_
### Publishing model with review
You can now publish your model and generate the review video. Go to **Pype → Publish...**,
validate if you like, and publish. During publishing, Maya will create a _playblast_
of the whole frame range you've specified and then pass those frames to _ffmpeg_.
That will create a video file, pass it to another extractor that adds burnins,
and finally upload the video to ftrack alongside your published model (or other type)
version. All parts of this process - which burnins, what type of video file,
settings for the Maya playblast - can be customized by your TDs. For more information
about customizing the review process refer to the [admin section](admin_presets_plugins).
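The playblast-to-video hand-off is conceptually a single _ffmpeg_ invocation. A simplified sketch of how such a command might be assembled (the paths and codec flags are illustrative, not Pype's actual settings):

```python
def build_ffmpeg_cmd(sequence, fps, output):
    """Assemble an ffmpeg command turning an image sequence into a review movie."""
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", sequence,          # e.g. "playblast.%04d.jpg"
        "-c:v", "libx264",       # h264 video stream
        "-pix_fmt", "yuv420p",   # widely playable pixel format
        output,
    ]

cmd = build_ffmpeg_cmd("playblast.%04d.jpg", 25, "review.mp4")
print(" ".join(cmd))
```

In a real pipeline, a burnin pass and the ftrack upload would follow this step.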
## Working with Yeti in Pype
Pype can work with [Yeti](https://peregrinelabs.com/yeti/) in two data modes.
It can handle Yeti caches and Yeti rigs.
### Creating and publishing Yeti caches
Let's start by creating a simple Yeti setup: just one object and a Yeti node. Open a new
empty scene in Maya and create a sphere. Then select the sphere and go to **Yeti → Create Yeti Node on Mesh**.
Open the Yeti node graph **Yeti → Open Graph Editor** and create a setup like this:
![Maya - Yeti Basic Graph](assets/maya-yeti_basic_setup.jpg)
It doesn't matter what settings you use for now, just select the proper shape in the first
*Import* node. Select your Yeti node and create a *Yeti Cache instance* - go to **Pype → Create...**
and select **Yeti Cache**. Leave `Use selection` checked. You should end up with this setup:
![Maya - Yeti Basic Setup](assets/maya-yeti_basic_setup_outline.jpg)
You can see there is a `yeticacheDefault` set. Instead of *Default* it could be named with
whatever name you've entered in the `subset` field during instance creation.
We are almost ready for publishing the cache. You can check the basic settings by selecting the
Yeti cache set and opening *Extra attributes* in the Maya **Attribute Editor**.
![Maya - Yeti Basic Setup](assets/maya-yeti_cache_attributes.jpg)
Those attributes are mostly self-explanatory:
- `Preroll` - number of frames the simulation will run before cache frames are stored.
This is useful to "settle" the simulation, for example.
- `Frame Start` - the frame from which we start storing cache files
- `Frame End` - the frame up to which we store cache files
- `Fps` - fps of the cache
- `Samples` - how many time samples we take during caching
You can now publish the Yeti cache like any other type: **Pype → Publish**. It will
create a sequence of `.fur` files and a `.fursettings` metadata file with the Yeti node
settings.
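To make the attributes concrete: with `Preroll`, the simulation starts early, but only frames inside the range are written out, each as a numbered `.fur` file. A hypothetical sketch of which frames get simulated versus stored (illustrative, not Pype's extractor):

```python
def yeti_cache_plan(frame_start, frame_end, preroll=0):
    """Split frames into simulated-only preroll frames and stored cache files."""
    preroll_frames = list(range(frame_start - preroll, frame_start))
    stored = ["cache.%04d.fur" % f for f in range(frame_start, frame_end + 1)]
    return preroll_frames, stored

preroll, files = yeti_cache_plan(1001, 1005, preroll=10)
print(len(preroll), files[0], files[-1])  # 10 cache.1001.fur cache.1005.fur
```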
### Loading Yeti caches
You can load a Yeti cache via **Pype → Load ...**. Select your cache, right-click on
it and select **Load Yeti cache**. This will create a Yeti node in the scene and set its
cache path to point to your published cache files. Note that this Yeti node will
be named with the same name as the one you've used to publish the cache. Also notice that
when you open the graph on this Yeti node, all nodes are as they were in the publishing node.
### Creating and publishing Yeti Rig
Yeti Rigs work in a similar way to caches, but are more complex and deal with
other data used by Yeti, like geometry and textures.
Let's start by [loading](artist_hosts_maya#loading-model) some model into a new scene.
I've loaded my Buddha model.
Select the model mesh, create a Yeti node - **Yeti → Create Yeti Node on Mesh** - and
set up a Yeti graph similar to the cache example above.
Then select this Yeti node (mine has the default name `pgYetiMaya1`) and
create a *Yeti Rig instance* - go to **Pype → Create...** and select **Yeti Rig**.
Leave `Use selection` checked.
The last step is to add our model geometry to the rig instance: middle-drag its
geometry into the `input_SET` under the `yetiRigDefault` set representing the rig instance.
Note that its name can differ and is based on your subset name.
![Maya - Yeti Rig Setup](assets/maya-yeti_rig.jpg)
Save your scene and you are ready to publish our new simple Yeti Rig!
Go to **Pype → Publish** and run it. This will publish the rig with its geometry
as a `.ma` scene, save the Yeti node settings and export one frame of Yeti cache from
the beginning of your timeline. It will also collect all textures used in the Yeti
node, copy them to the `resource` directory of the publish folder and set the *Image search path*
of the published node to this location.
:::note Collect Yeti Cache failure
If you encounter a **Collect Yeti Cache** failure during the collecting phase, with an error like
```fix
No object matches name: pgYetiMaya1Shape.cbId
```
then it is probably caused by the scene not being saved before publishing.
:::
### Loading Yeti Rig
You can load published Yeti Rigs like anything else in Pype - **Pype → Load ...**,
select your Yeti rig and right-click on it. In the context menu you should see
**Load Yeti Cache** and **Load Yeti Rig** items (among others). The first one will
load the single-frame cache. The other one will load the whole rig.
Notice that although we put only geometry into the `input_SET`, the whole hierarchy was
pulled in as well. This allows you to store complex scene elements along with the Yeti
node.
:::tip auto-connecting rig mesh to existing one
If you select some objects before loading the rig, it will try to find shapes
under the selected hierarchies and match them with the shapes loaded with the rig (published
under `input_SET`). This mechanism uses the *cbId* attribute on those shapes.
If a match is found, the shapes are connected using their `outMesh` and `inMesh` attributes. Thus you can easily connect existing animation to the loaded rig.
:::

---
id: artist_hosts_nuke
title: Nuke
sidebar_label: Nuke
---
:::important
After Nuke starts it will automatically **Apply All Settings** for you. If you are sure the settings are wrong, just contact your supervisor and they will set them correctly for you in the project database.
:::
## Pype global tools
- [Set Context](artist_tools.md#set-context)
- [Work Files](artist_tools.md#workfiles)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Manage (Inventory)](artist_tools.md#inventory)
- [Publish](artist_tools.md#publisher)
- [Library Loader](artist_tools.md#library-loader)
## Nuke specific tools
<div class="row markdown">
<div class="col col--6 markdown">
### Set Frame Ranges
Use this feature in case you are not sure the frame range is correct.
##### Result
- setting Frame Range in script settings
- setting Frame Range in viewers (timeline)
</div>
<div class="col col--6 markdown">
![Set Frame Ranges](assets/nuke_setFrameRanges.png)
</div>
</div>
<figure>
![Set Frame Ranges Timeline](assets/nuke_setFrameRanges_timeline.png)
<figcaption>
1. limiting to Frame Range without handles
2. **Input** handle on start
3. **Output** handle on end
</figcaption>
</figure>
### Set Resolution
<div class="row markdown">
<div class="col col--6 markdown">
This menu item will set the correct resolution format for you, as defined by your production.
##### Result
- creates new item in formats with project name
- sets the new format as used
</div>
<div class="col col--6 markdown">
![Set Resolution](assets/nuke_setResolution.png)
</div>
</div>
### Set Colorspace
<div class="row markdown">
<div class="col col--6 markdown">
This menu item will set the correct Colorspace definitions for you. Everything has to be configured by your production (project coordinator).
##### Result
- sets Colorspace in your script settings
- sets the preview LUT in your viewers
</div>
<div class="col col--6 markdown">
![Set Colorspace](assets/nuke_setColorspace.png)
</div>
</div>
### Apply All Settings
<div class="row markdown">
<div class="col col--6 markdown">
It is usually enough to use this option once in a while, just to make sure your workfile has the correct properties set.
##### Result
- set Frame Ranges
- set Colorspace
- set Resolution
</div>
<div class="col col--6 markdown">
![Apply All Settings](assets/nuke_applyAllSettings.png)
</div>
</div>
### Build First Work File
<div class="row markdown">
<div class="col col--6 markdown">
This tool will create the first version of your workfile and save it to the correct folder with the correct file name convention. It will look into the database and get the last [versions](artist_concepts.md#version) of all available [subsets](artist_concepts.md#subset).
</div>
<div class="col col--6 markdown">
![Build First Work File](assets/nuke_buildFirstWorkfile.png)
</div>
</div>
##### Result
<div class="row markdown">
<div class="col col--6 markdown">
- adds the last versions of all subsets (rendered image sequences) as Read nodes
- adds available color transformations under the Read nodes
- adds a publishable write node as the `renderMain` subset
</div>
<div class="col col--6 markdown">
<figure>
![Set Frame Ranges Timeline](assets/nuke_autoBuild.png)
<figcaption>
The orange arrow points at the `Lut` groups.
</figcaption>
</figure>
</div>
</div>

---
id: artist_hosts_nukestudio
title: Hiero
sidebar_label: Hiero / Nuke Studio
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
:::note
All the information also applies to **_Nuke Studio_**, but for simplicity we only refer to Hiero. The workflows are identical for both. We support versions **`11.0`** and above.
:::
## Hiero specific tools
<div class="row markdown">
<div class="col col--6 markdown">
### Create Default Tags
This tool will recreate all the pype tags needed for successful publishing. It runs automatically when Hiero starts. Use this tool to manually re-create all the tags if you accidentally delete them, or if you want to reset them to default values.
</div>
<div class="col col--6 markdown">
![Default Tags](assets/nukestudio_defaultTags.png)
</div>
</div>
#### Result
- Creates tags in the Tags bin in case there were none
- Sets all tags to default values if they have been altered
## Publishing Shots
<div class="row markdown">
<div class="col col--6 markdown">
With Pype, you can use Hiero as a starting point for creating the project hierarchy in the avalon and ftrack databases (episodes, sequences, shots, folders etc.), and for publishing plates, reference quicktimes, audio and various soft effects that will be available later on for compositors and 3D artists to use.
There are two ways to `Publish` data and create shots in the database from Hiero. Either use the context menu by right-clicking selected clips, or go to the top `menu > Pype > Publish`.
</div>
<div class="col col--6 markdown">
![Clips naming](assets/nukestudio_basic_clipNaming.png)
</div>
</div>
Keep in mind that publishing currently works on selected clips only.
Shot names for all the related plates that you want to publish (subsets) have to be the same for them to be correctly paired together (as shown in the image).
Note the layer **review**, which contains `plateMainReview`.
This media is just an h264, 1920x1080 video that will be used as a preview of the actual `plateMain` subset and will be uploaded to ftrack. We explain how to work with the review tag in [**Reviewing**](#reviewing).
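The pairing rule described above can be illustrated as grouping clips by name across tracks (a hypothetical sketch, not the actual collector code):

```python
from collections import defaultdict

def group_clips_by_name(clips):
    """clips: list of (track, clip_name) tuples -> {clip_name: [tracks]}."""
    groups = defaultdict(list)
    for track, name in clips:
        groups[name].append(track)
    return dict(groups)

clips = [
    ("plates", "sh010"),
    ("review", "sh010"),   # same clip name -> paired as the review of sh010
    ("plates", "sh020"),
]
print(group_clips_by_name(clips))
# {'sh010': ['plates', 'review'], 'sh020': ['plates']}
```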
:::important
To successfully publish a shot from Hiero:
1. At least one clip of your shot must be tagged with `Hierarchy`, `subset` and `handleStart/End`.
2. Your source media must be pre-cut to correct length (including handles)
:::
### Tagging
Pype's custom tags are used for defining shot parameters and for defining which clips will be published and how.
If you want to add any properties to your clips, you'll need to adjust the values on the given tag and then drag it onto the clip.
<figure>
![Tags basic](assets/nukestudio_tagsToClips_basic.png)
<figcaption>
1. Double-click a preferred tag and drag&drop it onto the selected clip(s)
2. A basic set of tags on a clip (usually subset: plateMain)
3. Additionally, select a clip to edit its parameters
4. Edit parameters here, but do not touch `family`
</figcaption>
</figure>
:::important
Only clips with `subset` will be directly processed for publishing.
:::
### Custom Tags Details
#### Asset related
| Icon | Description | Editable | Options |
| ------------------- | ---------------------------------------------------------------------------------- | ------------------------------------- | ---------------------------------------------------------------------------------------- |
| ![Hierarchy][hi] | Define parent hierarchy of the shot. Usually combined with one of subset tags. | root, folder, sequence, episode, shot | example: {sequence} = name of hiero sequence or overwrite by any text without `-` or `_` |
| ![Frame Start][fst] | Set start frame of the shot. Using `"source"` will keep original frame numbers. | number | int `number` or `"source"` |
#### Subsets
| Icon | Description | Editable | Options |
| ------------------ | ------------------------------------------------------------------------------ | -------- | --------------------------------- |
| ![Review][rew] | Choose which track holds review quicktime for the given shot. | track | `"review"` or other track name |
| ![Plate Main][pmn] | Main plate subset identifier | subset | `"main"` or other |
| ![Plate FG][pfg] | Foreground plate subset identifier (comped over the main plate) | subset | `"Fg##"` or other |
| ![Plate BG][pbg] | Background plate subset identifier (comped under the main plate) | subset | `"Bg##"` or other |
| ![Plate Ref][ref] | Reference plate subset identifier | subset | `"Ref"` or other |
#### Subset's attributes
| Icon | Description | Editable | Options |
| ------------------ | --------------------------------------------------------------------------------- | ------------------- | ----------------------------- |
| ![Resolution][rsl] | Use source resolution instead of sequence settings. | none | |
| ![Retiming][rtm] | Publish retime metadata to shot if retime or time-warp found on clip | marginIn, marginOut | int `number` frame cushioning |
| ![Lens][lns] | Specify lens focal length metadata (work in progress) | focalLengthMm | int `number` |
#### Handles
| Icon | Description | Editable | Options |
| --------------------- | ---------------------------------------------------------------------------- | -------- | -------------------------- |
| ![Handles Start][ahs] | Handles at the start of the clip/shot | value | change to any int `number` |
| ![Handles End][ahe] | Handles at the end of a clip/shot | value | change to any int `number` |
[hi]: assets/nks_icons/hierarchy.png
[ahs]: assets/nks_icons/3_add_handles_start.png
[ahe]: assets/nks_icons/1_add_handles_end.png
[rsl]: assets/nks_icons/resolution.png
[rtm]: assets/nks_icons/retiming.png
[rew]: assets/nks_icons/review.png
[pmn]: assets/nks_icons/z_layer_main.png
[pfg]: assets/nks_icons/z_layer_fg.png
[pbg]: assets/nks_icons/z_layer_bg.png
[lns]: assets/nks_icons/lense1.png
[fst]: assets/nks_icons/frame_start.png
[ref]: assets/nks_icons/reference.png
### Handles
Pype requires handle information in shot metadata even if the handles are set to 0.
For this you need to add handle tags to the main clip (the one with the Hierarchy tag).
This way we are defining a shot property. In case you wish to have different
handles on other subsets (e.g. when plateBG is longer than plateFG), you can add handle tags with a different value to this longer plate.
If you wish to have a different handle length (say 100) than the default tags provide, simply drag `start: add 10 frames` to your clip,
then go to the clip's tags, find the tag, replace 10 with 100 in the name and also change the value to 100.
This is also explained in the tutorial [`Extending premade handles tags`](#extending-premade-handles-tags).
:::caution
Even if you don't need any handles you have to add `start: add 0 frames` and `end: add 0 frames` tags to the clip with Hierarchy tag.
:::
### Retiming
Pype is also able to publish retiming parameters into the database.
Any clip with an **editorial**/**retime** or **TimeWarp** soft effect has to be tagged with the `Retiming` tag if you want this information preserved during publishing.
Any animation on a **TimeWarp** is also preserved and reapplied in _Nuke_.
You can only combine a **retime** with a single **TimeWarp**.
### Reviewing
There are two ways to publish reviewable h264 mov into Pype (and ftrack).
<Tabs
defaultValue="reviewtag"
values={[
{label: 'Review Tag', value: 'reviewtag'},
{label: 'Sidecar File', value: 'sidecar'},
]}>
<TabItem value="reviewtag">
The first one uses the Review tag pointing to the track that holds the reviewable quicktimes for the plates.
This tag's metadata has a `track` key inside that points to the `review` track by default. If you drop this tag onto any publishable clip on the timeline, you're telling Pype "you will find the quicktime version of this plate on the `review` track" (the clips must have the same name).
In the image on the right we dropped it onto the **plateMain** clip. Then we renamed the layer that holds the reviewable quicktime `plateMainReview`. You can see that the clip names are the same.
<figure>
![Reviewing](assets/nukestudio_reviewing.png)
<figcaption>
1. a `- review` suffix is added to the publishing item label if any reviewable file is found
2. the `plateMain` clip is holding the Review tag
3. the layer name is `review`, as used by default in the _Review_ tag's _track_ key
4. the name of the clip is the same across all subsets
</figcaption>
</figure>
</TabItem>
<TabItem value="sidecar">
The second way is to add an **h264 mov 1920x1080** into the same folder
as the image sequence. The name of the file has to be the same as the image sequence.
The publisher will pick this file up and add it to the files list during collecting.
This will also add `"- review"` to the instance label in **Publish**.
Example:
- img seq: `image_sequence_name.0001.exr`
- mov: `image_sequence_name.mov`
</TabItem>
</Tabs>
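The sidecar matching boils down to a filename comparison: the movie's stem must equal the image sequence's name with the frame number stripped. A hedged sketch of that pairing logic (not the actual Pype collector):

```python
import re

def find_review_sidecar(sequence_file, folder_files):
    """Return the .mov whose name matches the image sequence, if any."""
    # Strip the frame padding and extension: "name.0001.exr" -> "name"
    stem = re.sub(r"\.\d+\.\w+$", "", sequence_file)
    candidate = stem + ".mov"
    return candidate if candidate in folder_files else None

files = ["image_sequence_name.0001.exr", "image_sequence_name.mov"]
print(find_review_sidecar("image_sequence_name.0001.exr", files))
# image_sequence_name.mov
```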
--------------
### LUT Workflow
<div class="row markdown">
<div class="col col--6 markdown">
It is possible to publish Hiero soft effects for compositors to use later on. You can add the effect to a particular clip or to a whole layer, as shown in the picture. All clips
below the `Video 6` layer (green arrow) will be published with the **lut** subset, which combines all the colour corrections from the soft effects. Any clips above the `Video 6` layer will have no **lut** published with them.
</div>
<div class="col col--6 markdown">
![Reviewing](assets/nukestudio_softEffects.png)
</div>
</div>
Any external LUT files used in the soft effects will be copied over to the `resources` directory of the published subset folder `lutPlateMain` (in our example).
:::note
<div class="row markdown">
<div class="col col--6 markdown">
You currently cannot publish soft effects on their own, because at the moment we only support soft effects as part of publishing another subset. The image demonstrates a successful publish.
</div>
<div class="col col--6 markdown">
![Reviewing](assets/nukestudio_lutSucess.png)
</div>
</div>
:::
## Tutorials
### Basic publishing with soft effects
<iframe src="https://drive.google.com/file/d/1-BN6ia9ic9om69mq3T4jiwZnbrBGdyNi/preview"></iframe>
### Extending premade handles tags
<iframe src="https://drive.google.com/file/d/1-BexenWWmSURA-QFgxkoZtyxMEhZHOLr/preview"></iframe>

---
id: artist_hosts_photoshop
title: Photoshop
sidebar_label: Photoshop
---
## Available Tools
- [Work Files](artist_tools.md#workfiles)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Publish](artist_tools.md#publisher)
- [Manage](artist_tools.md#inventory)
## Setup
To install the extension download [Extension Manager Command Line tool (ExManCmd)](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#option-2---exmancmd).
```
ExManCmd /install {path to pype-setup}/repos/avalon-core/avalon/photoshop/extension.zxp
```
## Usage
When you launch Photoshop you will be met with the Workfiles app. If you don't have any previous workfiles, you can just close this window.
In Photoshop you can find the tools in the `Avalon` extension:
![Extension](assets/photoshop_extension.PNG)
You can show the extension panel by going to `Window` > `Extensions` > `Avalon`.
### Create
When you have created an image you want to publish, you will need to create special groups or tag existing groups. To do this open the `Creator` through the extensions `Create` button.
![Creator](assets/photoshop_creator.PNG)
With the `Creator` you have a variety of options to create:
- Check `Use selection` (A dialog will ask whether you want to create one image per selected layer).
- Yes.
- No selection.
- This will create a single group named after the `Subset` in the `Creator`.
- Single selected layer.
- The selected layer will be grouped under a single group named after the selected layer.
- Single selected group.
- The selected group will be tagged for publishing.
- Multiple selected items.
- Each selected group will be tagged for publishing and each layer will be grouped individually.
- No.
- All selected layers will be grouped under a single group named after the `Subset` in the `Creator`.
- Uncheck `Use selection`.
- This will create a single group named after the `Subset` in the `Creator`.
### Publish
When you are ready to share some work, you will need to publish. This is done by opening the `Pyblish` through the extensions `Publish` button.
![Publish](assets/photoshop_publish.PNG)
This tool will run through checks to make sure the contents you are publishing are correct. Hit the "Play" button to start publishing.
You may encounter issues with publishing which will be indicated with red squares. If these issues are within the validation section, then you can fix the issue. If there are issues outside of validation section, please let the Pype team know.
#### Repair Validation Issues
All validators will give some description about what the issue is. You can inspect this by going into the validator through the arrow:
![Inspect](assets/photoshop_publish_inspect.PNG)
You can expand the errors by clicking on them for more details:
![Expand](assets/photoshop_publish_expand.PNG)
Some validators have repair actions which will fix the issue. You can identify validators with actions by the circle icon with an "A":
![Actions](assets/photoshop_publish_actions.PNG)
To access the actions, you right click on the validator. If an action runs successfully, the actions icon will turn green. Once all issues are fixed, you can just hit the "Refresh" button and try to publish again.
![Repair](assets/photoshop_publish_repair.gif)
### Load
When you want to load existing published work, you can load in smart layers through the `Loader`. You can reach the `Loader` through the extension's `Load` button.
![Loader](assets/photoshop_loader.PNG)
The supported families for Photoshop are:
- `image`
To load an image, right-click on the subset you want and choose a representation:
![Loader](assets/photoshop_loader_load.gif)
### Manage
Now that we have some images loaded, we can manage which version is loaded. This is done through the `Scene Inventory`. You can reach it through the extension's `Manage` button.
:::note
Loaded images have to stay as smart layers in order to be updated. If you rasterize a layer, you cannot update it to a different version.
:::
![Loader](assets/photoshop_manage.PNG)
You can switch to a previous version of the image or update to the latest.
![Loader](assets/photoshop_manage_switch.gif)
![Loader](assets/photoshop_manage_update.gif)

---
id: artist_hosts_unreal
title: Unreal
sidebar_label: Unreal
---
## Introduction
Pype supports Unreal in similar ways as other DCCs, yet there are a few specifics you need to be aware of.
### Project naming
Unreal doesn't support project names starting with a non-alphabetic character, so names like `123_myProject` are
invalid. If Pype detects such a name, it automatically prepends the letter **P** to make the name valid, so `123_myProject` will become `P123_myProject`. There is also a soft limit on project name length: names should be shorter than 20 characters. Longer names will trigger a warning in the Unreal Editor that there might be possible side effects.
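The renaming rule can be sketched as a small helper (this mirrors the behaviour described above; Pype's actual implementation may differ):

```python
def sanitize_unreal_project_name(name, soft_limit=20):
    """Prepend 'P' to names starting with a non-alphabetic character
    and flag names longer than the soft limit."""
    if not name[:1].isalpha():
        name = "P" + name
    too_long = len(name) > soft_limit
    return name, too_long

print(sanitize_unreal_project_name("123_myProject"))  # ('P123_myProject', False)
```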
## Pype global tools
Pype global tools can be found in *Window* main menu:
![Unreal Pype Menu](assets/unreal-avalon_tools.jpg)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Manage (Inventory)](artist_tools.md#inventory)
- [Publish](artist_tools.md#publisher)
- [Library Loader](artist_tools.md#library-loader)
## Static Mesh
### Loading
To import a Static Mesh model, just choose **Pype → Load ...** and select your mesh. Static meshes are transferred as FBX files as specified in the [Unreal Engine 4 Static Mesh Pipeline](https://docs.unrealengine.com/en-US/Engine/Content/Importing/FBX/StaticMeshes/index.html). This action will create a new folder with the subset name (`unrealStaticMeshMain_CON` for example) and put all data into it. Inside, you can find:
![Unreal Container Content](assets/unreal-container.jpg)
In this case there is **lambert1**, the material pulled from Maya when this static mesh was published; **unrealStaticMeshCube** is the geometry itself; and **unrealStaticMeshCube_CON** is an *AssetContainer* that marks this directory as an Avalon Container (to track changes) and holds Pype metadata.
### Publishing
Publishing a Static Mesh works in a similar way. Select your mesh in the *Content Browser* and go to **Pype → Create ...**. This will create a folder named after the subset you've chosen - for example **unrealStaticMeshDefault_INS**. In this folder are the mesh and an *Avalon Publish Instance* asset marking this folder as a publishable instance and holding important metadata. If you want to publish this instance, go to **Pype → Publish ...**

---
id: artist_publish
title: Publishing
sidebar_label: Publishing
---
## What is publishing?
A process of exporting particular data from your work scene to be shared with others.
Think of publishing as a checkpoint between two people, making sure that we catch mistakes as soon as possible and don't let them pass through to a pipeline step that would eventually need to be repeated if those mistakes were not caught.
Every time you want to share a piece of work with others (be it a camera, model, textures, animation or whatever), you need to publish this data. The main reason is to save time down the line and make it very clear what can and cannot be used in production.
This process should mostly be handled by publishing scripts, but in certain cases it might have to be done manually.
Published assets should comply to these rules:
- Clearly named, based on internal naming conventions.
- Versioned (with master version created for certain types of assets).
- Immediately usable, without any dependencies to unpublished assets or work files.
- Immutable
All of these go into the publish folder for the given entity (shot, asset, sequence).
:::note
Keep in mind that while publishing the data might take you some extra time, it will save much more time in the long run when your colleagues don't need to dig through your work files trying to understand them just to find that model you saved by hand.
:::
## Families:
Instances are categorized into families based on what type of data they contain. Some instances might have multiple families if needed. A shot camera, for example, will have the families 'camera' and 'review' to indicate that it's going to be used for a review quicktime, but also exported into a file on disk.
The following family definitions and requirements are Pype defaults and what we consider good industry practice, but most of the requirements can be easily altered to suit studio or project needs.
Here's a list of the supported families:
| Family | Comment | Example Subsets |
| ----------------------- | ------------------------------------------------ | ------------------------- |
| [Model](#model) | Cleaned geo without materials | main, proxy, broken |
| [Look](#look) | Package of shaders, assignments and textures | main, wet, dirty |
| [Rig](#rig) | Characters or props with animation controls | main, deform, sim |
| [Assembly](#assembly) | A complex model made from multiple other models. | main, deform, sim |
| [Layout](#layout) | Simple representation of the environment | main, |
| [Setdress](#setdress) | Environment containing only referenced assets | main, |
| [Camera](#camera) | May contain trackers or proxy geo | main, tracked, anim |
| [Animation](#animation) | Animation exported from a rig. | characterA, vehicleB |
| [Cache](#cache) | Arbitrary animated geometry or fx cache | rest, ROM , pose01 |
| MayaAscii | Maya publishes that don't fit other categories | |
| [Render](#render) | Rendered frames from CG or Comp | |
| RenderSetup | Scene render settings, AOVs and layers | |
| Plate | Ingested, transcode, conformed footage | raw, graded, imageplane |
| Write | Nuke write nodes for rendering | |
| Image | Any non-plate image to be used by artists | Reference, ConceptArt |
| LayeredImage | Software agnostic layered image with metadata | Reference, ConceptArt |
| Review | Reviewable video or image. | |
| Matchmove | Matchmoved camera, potentially with geometry | main |
| Workfile | Backup of the workfile with all it's content | uses the task name |
| Nukenodes | Any collection of nuke nodes | maskSetup, usefulBackdrop |
| Yeticache | Cached out yeti fur setup | |
| YetiRig | Yeti groom ready to be applied to geometry cache | main, destroyed |
| VrayProxy | Vray proxy geometry for rendering | |
| VrayScene | Vray full scene export | |
| ArnoldStandin | All arnold .ass archives for rendering | main, wet, dirty |
| LUT | | |
| Gizmo | | |
| Harmony.template | | |
| Harmony.palette | | |
### Model
Clean geometry without any material assignments. A published model can be as small as a single mesh or as complex as a full building; that is purely up to the artist or the supervisor. Models can contain a hierarchy defined by groups or nulls for better organisation.
Apart from model subsets, we also support LODs as an extra level on top of a subset. To publish LODs, you just need to prepare subsets for publishing named `modelMySubsetName_LOD##`; if Pype finds `_LOD##` (hashes replaced with the LOD level), it will automatically be considered a LOD of the given subset.
Example Subsets:
`modelMain`, `modelProxy`, `modelSculpt`, `modelBroken`, `modelMain_LOD01`, `modelMain_LOD02`
Example representations:
`.ABC`, `.MA`, `.MB`, `.BLEND`, `.OBJ`, `.FBX`
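Detecting the LOD suffix is a simple pattern match. A sketch of how a subset name might be split (illustrative, not Pype's internal code):

```python
import re

def split_lod(subset_name):
    """Return (base_subset, lod_level), or (subset_name, None) if no LOD suffix."""
    match = re.match(r"^(.*)_LOD(\d+)$", subset_name)
    if match:
        return match.group(1), int(match.group(2))
    return subset_name, None

print(split_lod("modelMain_LOD02"))  # ('modelMain', 2)
print(split_lod("modelProxy"))       # ('modelProxy', None)
```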
### Look
A package of materials, shaders, assignments, textures and attributes that collectively define the look of a model for rendering or preview purposes. This can usually be applied only to the model it was authored for, or its corresponding cache, however material sharing across multiple models is also possible. A look should be fully self-contained and ready for rendering.
Example Subsets:
`lookMain`, `lookProxy`, `lookWet`, `lookDirty`, `lookBlue`, `lookRed`
Example Representations:
`.MA + .JSON`, `.MTLX (yet unsupported)`, `.BLEND`
Please note that a look is almost never a single representation, but a combination of multiple.
For example in Maya a look consists of `.ma` file with the shaders, `.json` file which
contains the attributes and assignments and `/resources` folder with all the required textures.
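To make the multi-representation idea concrete, here is a minimal sketch of the kind of data the JSON sidecar might pair with the shader file. The keys and values are purely illustrative assumptions, not Pype's actual schema:

```python
# Illustrative only: a look publish pairs the shader file with a JSON sidecar
# describing shader assignments and per-mesh render attributes.
# All field names here are assumptions for the sketch, not Pype's real schema.
look_sidecar = {
    "assignments": [
        {"shader": "skin_SG", "members": ["|char|body_GEO"]},
        {"shader": "cloth_SG", "members": ["|char|shirt_GEO", "|char|pants_GEO"]},
    ],
    "attributes": [
        {"node": "|char|body_GEO", "name": "aiSubdivIterations", "value": 2},
    ],
}

def shaders_for_mesh(sidecar, mesh):
    """Collect every shading group the sidecar assigns to a given mesh."""
    return [a["shader"] for a in sidecar["assignments"] if mesh in a["members"]]

print(shaders_for_mesh(look_sidecar, "|char|body_GEO"))  # ['skin_SG']
```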
### Rig
Characters or props with animation controls or other parameters, ready to be referenced into a scene and animated. Animation rigs tend to be very software specific, but in general they consist of geometry, bones or joints, controllers and deformers. Pype in Maya supports both self-contained rigs that include everything in one file, and rigs that use nested references to bring in geometry, or even the skeleton. By default we bake rigs into a single file during publishing, but that behaviour can be turned off to keep the nested references live in the animation scenes.
Example Subsets:
`rigMain`, `rigMocap`, `rigSim`, `rigCamera`, `rigMuscle`
Example Representations:
`.MA`, `.MB`, `.BLEND`, `.HDA`
### Assembly
A subset created by combining two or more smaller subsets into a composed bigger asset.
A good example would be a restaurant table asset with the cutlery and chairs included,
that will eventually be loaded into a restaurant Set. Instead of loading each individual
fork and knife for each table in the restaurant, we can first prepare `assemblyRestaurantTable` subset
which will contain the table itself, with cutlery, flowers, plates and chairs nicely arranged.
This table can then be loaded multiple times into the restaurant for easier scene management
and updates.
An extracted assembly doesn't contain any geometry directly, but rather information about all the individual subsets that are inside the assembly, their versions and transformations. On top of that, an alembic is exported which only holds the extra transforms and groups needed to fully re-create the original assembled scene.
Assembly can also be used as a sort of collection of elements that are often used together in shots. For example, if we're set dressing lots of forest shots, it would make sense to make an assembly of all the forest elements for scattering, so we don't have to load them individually into each shot.
Example Subsets:
`assemblyTable`, `assemblyForestElements`, `assemblyRoof`
Example Representations:
`.ABC + .JSON`
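A minimal sketch of the per-instance data an extracted assembly carries: subsets, versions and transforms rather than geometry. The field names and layout are assumptions for illustration, not Pype's actual format:

```python
# Illustrative sketch: an assembly stores, per placed instance, which subset
# and version it references and where it sits, instead of duplicating geometry.
# Field names are assumptions, not Pype's actual schema.
assembly = [
    {"subset": "modelChair", "version": 4, "translate": [2.5, 0.0, -1.0]},
    {"subset": "modelChair", "version": 4, "translate": [-2.5, 0.0, -1.0]},
    {"subset": "modelTable", "version": 7, "translate": [0.0, 0.0, 0.0]},
]

# A loader only needs to reference each unique (subset, version) pair once,
# then instance it for every placement.
unique_subsets = {(i["subset"], i["version"]) for i in assembly}
print(sorted(unique_subsets))  # [('modelChair', 4), ('modelTable', 7)]
```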
### Setdress
Fully prepared environment scene assembled from other previously published assets. Setdress should be ready for rendering as is, including any instancing, material assignments and other complex setups the environment requires. Due to this complexity, setdress is currently only publishable in the native file format of the host where it was created. In Maya that would be a `.ma` or `.mb` file.
### Camera
Clean virtual camera without any proprietary rigging or host specific information. Considering how widely published cameras are used across hosts in production, a published camera should ideally be as simple and clean as possible to ensure consistency when loaded into various hosts.
Example Representations:
`.MA`, `.ABC`
### Cache
Geometry or effect with baked animation. Cache is usually exported as alembic,
but can be potentially any other representation that makes sense in the given scenario.
Cache is defined by the artist directly in the fx or animation scene.
Example Subsets:
`cacheMain`, `cacheSim`
Example Representations:
`.ABC`, `.VDB`, `.BGEO`
### Animation
Published result of an animation created with a rig. Animation can be extracted
as animation curves, cached out geometry or even fully animated rig with all the controllers.
Animation cache is usually defined by a rigger in the rig file of a character, or by an FX TD in the effects rig, to ensure consistency of outputs.
Example Subsets:
`animationBob_01`, `animationJack_02`, `animationVehicleA`
Example Representations:
`.MA`, `.ABC`, `.JSON`
### Yeti Cache
Cached out yeti fur simulation that originates from a yeti rig applied in the shot context.
### Yeti Rig
Yeti groom setup ready to be applied to a cached out character in the shot context.
### Render

---
id: artist_tools
title: Tools
sidebar_label: Tools
---
## Set Context
<div class="row markdown">
<div class="col col--6 markdown">
Any time your host app is open in a defined context, that context can be changed to a different hierarchy, asset or task within the project. This allows you to switch your open session to any other asset, shot or task in the same project, which is particularly useful when your host takes a long time to start.
</div>
<div class="col col--6 markdown">
![workfiles_1](assets/tools_context_manager.png)
</div>
</div>
:::note
Notice that the window doesn't close after hitting `Accept` and confirming the change of context. This behaviour lets you keep the window open and change the context multiple times in a row.
:::
## Creator
### Details
Despite the name, Creator isn't for making new content in your scene, but rather taking what's already in it and creating all the metadata your content needs to be published.
In Maya this means creating a set with everything you want to publish and assigning custom attributes to it, so it gets picked up during the publishing stage.
In Nuke it's either converting an existing write node to a publishable one, or simply creating a write node with all the correct settings and outputs already set.
### Usage
1. Select what you want to publish from your scene
2. Open Creator from the Avalon menu
3. Choose what family (data type) you need to export
4. Type the name for your export. This name is how others will refer to this particular subset when loading it into their scenes. Every asset should have a Main subset, but can have any number of other variants.
5. Click on Create
* * *
## Loader
Loader loads published subsets into your current scene or script.
### Usage
1. Open Loader from the pipeline menu
2. Select the asset where the subset you want to load is published
3. From the subset list, select the subset you want
4. Right-click the subset
5. From the action menu, select what you want to do *(load, reference, ...)*
![tools_loader_1](assets/tools/tools_loader_1.png)
<div class="row markdown">
<div class="col col--6 markdown">
### Refresh data
Data is not auto-refreshed, to avoid overloading the database. To refresh assets or subsets, press the refresh button.
</div>
<div class="col col--6 markdown">
![tools_loader_50](assets/tools/tools_loader_50.png)
</div>
</div>
### Load another version
Loader loads the latest version by default, but you can of course load other versions. Double-click the subset in the version column to expose the drop-down, choose the version you want to load and continue from point 4 of the [Usage](#usage-1).
<div class="row markdown">
<div class="col col--6 markdown">
![tools_loader_21](assets/tools/tools_loader_21.png)
</div>
<div class="col col--6 markdown">
![tools_loader_22](assets/tools/tools_loader_22.png)
</div>
</div>
### Filtering
#### Filter Assets and Subsets by name
To filter assets/subsets by name, just type the name or part of it into the filter text input. Only assets/subsets containing the entered string remain visible.
- **Assets filtering example** *(it works the same for subsets)*:
<div class="row markdown">
<div class="col col--6 markdown">
![tools_loader_4](assets/tools/tools_loader_4-small.png)
</div>
<div class="col col--6 markdown">
![tools_loader_5](assets/tools/tools_loader_5-small.png)
</div>
</div>
#### Filter Subsets by Family
<div class="row markdown">
<div class="col col--6 markdown">
To filter [subsets](artist_concepts#subset) by their [families](artist_publish#families) you can use families list where you can check families you want to see or uncheck families you are not interested in.
</div>
<div class="col col--6 markdown">
![tools_loader_30](assets/tools/tools_loader_30-small.png)
</div>
</div>
### Subset groups
Subsets may be grouped, which can help make the subset list easier to navigate. You can toggle the visibility of groups with the `Enable Grouping` checkbox.
![tools_loader_40](assets/tools/tools_loader_40-small.png)
#### Add to group or change current group
You can set the group of selected subsets with the shortcut `Ctrl + G`.
![tools_loader_41](assets/tools/tools_loader_41-small.png)
:::warning
This sets the group in the Avalon database, so your changes will take effect for all users.
:::
Work in progress...
## Library Loader
Library Loader is an extended [loader](#loader) which allows you to load published subsets from library projects. The controls are the same, but the Library Loader has an extra combo box that lets you choose the project you want to load from.
<div class="row markdown">
<div class="col col--6 markdown">
![tools_library_1](assets/tools/tools_library_1-small.png)
</div>
<div class="col col--6 markdown">
![tools_library_2](assets/tools/tools_library_2-small.png)
</div>
</div>
* * *
## Publisher
> Use publish to share your work with others. It collects, validates and exports the data in standardized way.
### Details
When you run pyblish, the UI is made of two main parts. On the left, you see all the items pyblish will be working with (called instances), and on the right a list of actions that will process these items.
Even though every task type has some pre-defined settings for what should be collected from the scene and which items will be published by default, you can technically publish any output type from any task type.
Each item is passed through multiple plugins, each doing a small piece of work. These are organized into 4 areas and run in sequence.
### Using Pyblish
In the best case scenario, you open pyblish from the Avalon menu, press play, wait for it to finish, and you're done.
These are the steps in detail, for cases where the default settings don't work for you, or you know that the task you're working on requires a different treatment.
#### Collect
Finds all the important data in the scene and makes it ready for publishing
#### Validate
Each validator makes sure your output complies with one particular condition. This could be anything from naming conventions and scene settings to plugin usage. An item can only be published if all validators pass.
#### Extract
An extractor takes the item and saves it to disk, usually to a temporary location. Each extractor represents one file format, and there can be multiple file formats exported for each item.
#### Integrate
An integrator takes the extracted files, categorizes them and moves them to the correct location on disk or on the server.
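The four stages above can be sketched as plain functions run in sequence. This is a simplified stand-in for the actual pyblish plugin classes; the scene layout, naming rule and paths are assumptions for the example:

```python
# Simplified sketch of the collect -> validate -> extract -> integrate pipeline.
# Data layout, the naming rule and the paths are illustrative assumptions.

def collect(scene):
    """Collect: find all publishable items (instances) in the scene."""
    return [item for item in scene if item.get("publish")]

def validate(instances):
    """Validate: an item can only be published if every condition passes."""
    bad = [i["name"] for i in instances if not i["name"].startswith(i["family"])]
    if bad:
        raise ValueError("naming convention failed for: %s" % ", ".join(bad))

def extract(instances, tmp="/tmp/publish"):
    """Extract: save each instance to disk, usually a temporary location."""
    return {i["name"]: "%s/%s.abc" % (tmp, i["name"]) for i in instances}

def integrate(extracted):
    """Integrate: move extracted files to their final, managed location."""
    return {name: path.replace("/tmp/publish", "/projects/final")
            for name, path in extracted.items()}

scene = [{"name": "modelMain", "family": "model", "publish": True},
         {"name": "helperLocator", "family": "model", "publish": False}]
instances = collect(scene)
validate(instances)
print(integrate(extract(instances)))
# {'modelMain': '/projects/final/modelMain.abc'}
```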
* * *
## Inventory
With Scene Inventory, you can browse, update and change subsets loaded with [Loader](#loader) into your scene or script.
:::note
You should first understand [Key concepts](artist_concepts#) to understand how you can use this tool.
:::
### Details
<!-- This part may be in Maya description? -->
Once a subset is loaded, it turns into a container within the scene. This containerization allows us to have a good overview of everything in the scene, but also makes it possible to change versions, notify the user if something is outdated, replace one asset with another, etc.
<!-- END HERE -->
The scene manager has a simple GUI focused on efficiency. You can see everything that has been previously loaded into the scene, how many times it's been loaded, what version, and a lot of other information. Loaded assets are grouped by their asset name, subset name and representation. This grouping gives the ability to apply changes to all instances of a loaded asset *(e.g. when a __tree__ is loaded 20 times, you can easily update the version for all of them)*.
![tools_scene_inventory_10](assets/tools/tools_scene_inventory_10-small.png)
To interact with any container, you need to right-click it and you'll see a drop-down with possible actions. The key actions for production are already implemented, but more will be added over time.
![tools_scene_inventory_20](assets/tools/tools_scene_inventory_20.png)
### Usage
#### Change version
You can change versions of loaded subsets with the Scene Inventory tool. The version of a loaded asset is colored red when a newer version is available.
![tools_scene_inventory_40](assets/tools/tools_scene_inventory_40.png)
##### Update to the latest version
Select the containers or subsets you want to update, right-click the selection and press `Update to latest`.
##### Change to specific version
Select the containers or subsets you want to change, right-click the selection, press `Set version`, select the version you want to change to from the dropdown and press the `OK` button to confirm.
![tools_scene_inventory_30](assets/tools/tools_scene_inventory_30.png)
#### Switch Asset
Switch Asset is a tool within Scene Inventory that gives you the ability to switch the asset, subset and representation of loaded assets.
![tools_scene_inventory_50](assets/tools/tools_scene_inventory_50.png)
Because a loaded asset is in fact a representation of a version published under an asset's subset, it is possible to switch each of these parts *(representation, version, subset and asset)*, but with limitations. The limitations are obvious: when you have loaded the `.ma` representation of the `modelMain` subset from the `car` asset, it is not possible to switch the subset to `modelHD` and keep the same representation if `modelHD` does not have a published `.ma` representation. It is possible to switch multiple loaded assets at once, which makes this tool a very powerful helper if all published assets contain the same subsets and representations.
The Switch tool won't let you cross these limitations, and will inform you when you have to be more specific because an impossible combination occurred *(it is also possible that there is no valid combination for the selected assets)*. The border is colored red and the confirm button is disabled while further specification is required.
![tools_scene_inventory_55](assets/tools/tools_scene_inventory_55.png)
Possible switches:
- switch **representation** (`.ma` to `.abc`, `.exr` to `.dpx`, etc.)
- switch **subset** (`modelMain` to `modelHD`, etc.)
- `AND` keep same **representation** *(with limitations)*
- `AND` switch **representation** *(with limitations)*
- switch **asset** (`oak` to `elm`, etc.)
- `AND` keep same **subset** and **representation** *(with limitations)*
- `AND` keep same **subset** and switch **representation** *(with limitations)*
- `AND` switch **subset** and keep same **representation** *(with limitations)*
- `AND` switch **subset** and **representation** *(with limitations)*
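The limitation behind all the cases above boils down to one check: the target combination must actually exist in the published data. A minimal sketch, with the data layout assumed for illustration:

```python
# Illustrative sketch of the switch limitation: a switch is only valid when
# the target subset has the requested representation published.
# The data layout below is an assumption for the example.
published = {
    "modelMain": {".ma", ".abc"},
    "modelHD": {".abc"},
}

def can_switch(target_subset, representation, published=published):
    """Return True when the target subset has the representation published."""
    return representation in published.get(target_subset, set())

print(can_switch("modelHD", ".ma"))   # False: modelHD has no published .ma
print(can_switch("modelHD", ".abc"))  # True
```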
We added one more switch layer above subset for LODs (Levels of Detail). That requires published subsets with names ending in **"_LOD{number}"**, where the number represents the level (e.g. modelMain_LOD1). It has the same limitations as mentioned above. This is handy when you want to change only the subset but keep the same LOD, or keep the same subset but change the LOD, for multiple assets. This option is hidden if you didn't select a subset that has published LOD subsets.
![tools_scene_inventory_54](assets/tools/tools_scene_inventory_54.png)
### Filtering
#### Filter by name
There is a search bar on the top for cases when you have a complex scene with many assets and need to find a specific one.
<div class="row markdown">
<div class="col col--6 markdown">
![tools_scene_inventory_60](assets/tools/tools_scene_inventory_60-small.png)
</div>
<div class="col col--6 markdown">
![tools_scene_inventory_61](assets/tools/tools_scene_inventory_61-small.png)
</div>
</div>
#### Filter with Cherry-pick selection
<div class="row markdown">
<div class="col col--6 markdown">
To keep only selected subsets, right-click the selection and press `Cherry-Pick (Hierarchy)` *(the border of the subset list changes to **orange** when Cherry-pick filtering is set, so you know a filter is applied).*
</div>
<div class="col col--6 markdown">
![tools_scene_inventory_62-small](assets/tools/tools_scene_inventory_62-small.png)
</div>
</div>
<div class="row markdown">
<div class="col col--6 markdown">
To return to the original state, right-click anywhere in the subset list and press `Back to Full-View`.
</div>
<div class="col col--6 markdown">
![tools_scene_inventory_63-small](assets/tools/tools_scene_inventory_63-small.png)
</div>
</div>
:::tip
You can Cherry-pick from Cherry-picked subsets.
:::
* * *
## Workfiles
Save new working scenes or scripts, or open the ones you previously worked on.
### Details
Instead of digging through your software native file browser, you can simply open the workfiles app and see all the files for the asset or shot you're currently working with. The app takes care of all the naming and the location of your work files.
When saving a scene you can also add a comment. It is completely up to you how you use this, however we recommend using it for subversion within your current working version.
Let's say that the last version of the comp you published was v003 and you're now working on the file `prj_sh010_compositing_v004.nk`. If you want to keep snapshots of your work, but not iterate on the main version because the supervisor is expecting the next publish to be v004, you can use the comment to do this: save the file under the names `prj_sh010_compositing_v004_001`, `prj_sh010_compositing_v004_002`, and so on. The main version is automatically iterated every time you publish something.
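The naming described above can be sketched as a small parser. The regex is an assumption for this example; Pype actually builds workfile names from configurable templates:

```python
import re

# Illustrative parser for workfile names like "prj_sh010_compositing_v004_002":
# "version" is the main publish iteration, the optional trailing number is the
# artist's comment/subversion. The pattern is an assumption for this sketch.
WORKFILE = re.compile(r"^(?P<task>.+)_v(?P<version>\d+)(?:_(?P<sub>\d+))?$")

def parse_workfile(stem):
    """Return (task, version, subversion) parsed from a workfile name stem."""
    m = WORKFILE.match(stem)
    return (m.group("task"), int(m.group("version")),
            int(m.group("sub")) if m.group("sub") else None)

print(parse_workfile("prj_sh010_compositing_v004_002"))
# ('prj_sh010_compositing', 4, 2)
print(parse_workfile("prj_sh010_compositing_v004"))
# ('prj_sh010_compositing', 4, None)
```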
### Usage
<div class="row markdown">
<div class="col col--6 markdown">
#### To open existing file:
1. Open Workfiles tool from pipeline menu
2. Select file from list - the latest version is the highest *(descendent ordering)*
3. Press `Open` button
</div>
<div class="col col--6 markdown">
![workfiles_1](assets/workfiles_1.png)
</div>
</div>
#### To save new workfile
1. Open Workfiles tool from pipeline menu
2. Press `Save As` button
3. You can add an optional comment that will be appended to the end of the filename
4. Press `OK`
:::note
You can manually override the workfile version by unticking next available version and using the version menu to choose your own.
:::
## Look Assigner
> The Look Manager takes care of assigning published looks to the correct model in the scene.
### Details
When a look is published it also stores the information about what shading networks need to be assigned to which models, but it also stores all the render attributes on the mesh necessary for a successful render.
### Usage
The Look Assigner GUI is made of two parts. On the left you will see the list of all the available models in the scene, and on the right all the looks that can be associated with them. To assign a look to a model you just need to:
1. Click on "load all subsets"
2. Choose a subset from the menu on the left
3. Right click on a look from the list on the right
4. Choose "Assign"
At this point you should have a model with all its shaders applied correctly. The tool automatically loads the latest look available.

---
id: artist_work
title: Working on tasks
sidebar_label: Working
---
Check the [documentation](https://docusaurus.io) for how to use Docusaurus.
## Lorem
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus elementum massa eget nulla aliquet sagittis. Proin odio tortor, vulputate ut odio in, ultrices ultricies augue. Cras ornare ultrices lorem malesuada iaculis. Etiam sit amet libero tempor, pulvinar mauris sed, sollicitudin sapien.
## Mauris In Code
Mauris vestibulum ullamcorper nibh, ut semper purus pulvinar ut. Donec volutpat orci sit amet mauris malesuada, non pulvinar augue aliquam. Vestibulum ultricies at urna ut suscipit. Morbi iaculis, erat at imperdiet semper, ipsum nulla sodales erat, eget tincidunt justo dui quis justo. Pellentesque dictum bibendum diam at aliquet. Sed pulvinar, dolor quis finibus ornare, eros odio facilisis erat, eu rhoncus nunc dui sed ex. Nunc gravida dui massa, sed ornare arcu tincidunt sit amet. Maecenas efficitur sapien neque, a laoreet libero feugiat ut.
## Nulla
Nulla facilisi. Maecenas sodales nec purus eget posuere. Sed sapien quam, pretium a risus in, porttitor dapibus erat. Sed sit amet fringilla ipsum, eget iaculis augue. Integer sollicitudin tortor quis ultricies aliquam. Suspendisse fringilla nunc in tellus cursus, at placerat tellus scelerisque. Sed tempus elit a sollicitudin rhoncus. Nulla facilisi. Morbi nec dolor dolor. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Cras et aliquet lectus. Pellentesque sit amet eros nisi. Quisque ac sapien in sapien congue accumsan. Nullam in posuere ante. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Proin lacinia leo a nibh fringilla pharetra.
## Orci
Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Proin venenatis lectus dui, vel ultrices ante bibendum hendrerit. Aenean egestas feugiat dui id hendrerit. Orci varius natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Curabitur in tellus laoreet, eleifend nunc id, viverra leo. Proin vulputate non dolor vel vulputate. Curabitur pretium lobortis felis, sit amet finibus lorem suscipit ut. Sed non mollis risus. Duis sagittis, mi in euismod tincidunt, nunc mauris vestibulum urna, at euismod est elit quis erat. Phasellus accumsan vitae neque eu placerat. In elementum arcu nec tellus imperdiet, eget maximus nulla sodales. Curabitur eu sapien eget nisl sodales fermentum.
