Realtime Compositor UX
This document presents the proposed user experience and user interface for the realtime compositor and explains the rationale behind the proposal. This is an initial document that should be iterated on with feedback from artists and other stakeholders.
To better define the expected user experience, we should first state the objectives behind the realtime compositor. For an initial release, the objectives are as follows:
- The user must be able to apply the scene compositor node tree on the output of the render engine in the viewport without the need to render the scene first.
- The user must be able to interactively tune, change, and extend the compositor node tree in real time, seeing the result of the changes directly in the viewport.
- The real time compositor is meant as a faster alternative to the existing compositor for iterating on the user’s scene without the need to re-render every time.
A workflow already exists in Blender that somewhat achieves the aforementioned objectives; we describe it here to build upon it and compare it to the proposed workflow. The existing workflow is as follows:
1. The user renders at a lower resolution and a relatively lower quality than the final intended resolution and quality to make re-renders and iterations faster.
2. When constructing the compositor node tree, the user typically uses lower quality settings for slow nodes like the Glare node for faster editing of the compositor node tree.
3. The user re-renders upon adjusting the scene to see the final compositing result.
4. Once the user is satisfied, the final resolution and quality settings are set, the quality settings of the compositor nodes are set to their final values, and the final render is produced.
Realtime Compositor Workflow
The proposed workflow that utilizes the real time compositor in the viewport is as follows, with analogies to the existing workflow:
- The user uses viewport rendering, which typically happens at a lower resolution and quality than the final render. Moreover, the rendering is progressive and interactive in terms of iterating on the scene. This is analogous to point 1 in the existing workflow and serves the same purpose.
- The user enables the real time compositor for the viewport render and constructs the compositor node tree using lower quality settings for nodes for faster tuning. This is analogous to point 2 in the existing workflow and serves the same purpose.
- Once the user is satisfied, the quality settings of the compositor nodes are set to their final values, and the final render is produced. This is identical to point 4 in the existing workflow.
So the main advantage of the new workflow is that it skips step 3 of the existing workflow. Moreover, it is significantly faster because the compositor itself is built to meet real time requirements.
It should be noted that the final render would still use the existing compositor in both workflows. However, the plan is to eventually unify both compositors and allow the final render to composite and compute the backdrop using the real time compositor.
There are some concerns about the real time compositor workflow that should be noted and handled. Each of the following sections describes one of those concerns.
Resolution Invariance
Since the viewport resolution will likely differ from the final render resolution, the result of compositing might differ between the two cases. For instance, an image overlaid at an absolute pixel position in one of the corners of the render will end up at a different position if the resolution doubles in the final render. However, this is no different from the existing workflow, and the solution is for the user to do any transformations in relative pixel space instead.
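As a minimal sketch of why relative pixel space solves this, the helper functions below are illustrative (not Blender API): an absolute pixel offset is converted to a resolution-relative factor, which resolves to the same visual position at any render size.

```python
# Sketch: an absolute pixel offset breaks when the render resolution
# changes, while a relative (normalized) offset stays consistent.
# These helper names are illustrative, not part of any Blender API.

def absolute_to_relative(offset_px, resolution):
    """Convert an absolute pixel offset to a resolution-relative factor."""
    return offset_px / resolution

def relative_to_absolute(factor, resolution):
    """Resolve a relative factor back to pixels for a given resolution."""
    return factor * resolution

# An overlay placed 120 px from the left edge of a 1920 px wide preview:
factor = absolute_to_relative(120, 1920)  # 0.0625

# At a 3840 px wide final render the same relative factor lands at
# 240 px, i.e. the same visual position; a raw 120 px offset would not.
print(relative_to_absolute(factor, 3840))  # 240.0
```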
Aspect Ratio Invariance
This is similar to the resolution invariance concern, except it is about the aspect ratio of the render, since the viewport can have any aspect ratio, which does not necessarily correspond to the aspect ratio of the final render. The solution is twofold. First, the user should use aspect ratio correction in relevant nodes. Second, the user should use the camera view with an opaque passepartout or an appropriate border render to maintain an aspect ratio identical to that of the final render.
Slow Operations
Some operations are intrinsically slow to compute, including convolutions, morphological operators, and other complex operations. The concern is that those operations cannot be done in real time, which would defeat the purpose of the real time compositor. However, many variants of those operations can be done almost in real time, for instance, when the structuring element of the morphological operator is small, or when the blur radius is small. So it is still worthwhile to support all operations and expect the user to use them responsibly or only use them at the very end when doing the final render.
Alternatives To Slow Operations
Users are sometimes forced to utilize slow operations to achieve results that can be computed through much faster and cheaper alternatives. While this wasn’t much of a concern for the existing compositor, the real time requirement of the real time compositor means that those cases can’t be ignored.
For instance, consider the vignette use case. The most straightforward way of creating a vignette in Blender has been to blur an ellipse mask and mix it with the image. While blurring can be fast enough on the GPU, it can be avoided entirely by computing the vignette procedurally. One could use a Texture node with a spherical blend texture, but those textures are clamped, inflexible, and not GPU accelerated. So there currently aren't any good options.
The solution is to extend the existing compositor by adding nodes that allow users to do those kinds of things procedurally. For instance, add a node that takes an image and returns the local, global, and normalized coordinates of the pixels of the image as well as its width and height; the user will then be able to use those outputs to compute whatever procedural gradients are needed for a vignette effect. Moreover, shading texture nodes should be ported so they can be used directly instead of the legacy texture system.
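To illustrate how such a procedural vignette could work without any blur, the sketch below evaluates a radial falloff analytically from normalized pixel coordinates, the kind of coordinate output the proposed node would provide. The function and its parameters are hypothetical, purely for illustration.

```python
import math

# Sketch of a procedural vignette mask, assuming normalized pixel
# coordinates (as proposed above) are available. No blur pass is
# needed: the falloff is evaluated analytically per pixel.

def vignette_mask(width, height, softness=0.5):
    """Return a row-major list of mask values in [0, 1]."""
    mask = []
    for y in range(height):
        for x in range(width):
            # Normalized coordinates centered at the image midpoint.
            nx = (x + 0.5) / width - 0.5
            ny = (y + 0.5) / height - 0.5
            # Radial distance: 0 at the center, ~0.707 at the corners.
            r = math.hypot(nx, ny)
            # Smooth falloff (smoothstep) toward the edges.
            t = min(max((r - (0.5 - softness)) / softness, 0.0), 1.0)
            mask.append(1.0 - t * t * (3.0 - 2.0 * t))
    return mask

# Bright in the center, dark toward the corners.
mask = vignette_mask(8, 8)
```

The mask would then be multiplied with the image, exactly as the blurred-ellipse mask is today, but at a fraction of the cost.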
This applies to other operations like Fog Glare, which is implemented as a convolution, while a faster method could yield a similar result at a fraction of the execution time. So it would make sense to add faster variants of such operations to both the existing compositor and the real time compositor.
Multiple Output Nodes
The compositor has three nodes that can be used as output nodes: the Composite, Viewer, and Split Viewer nodes. Which of those should be considered the output of the real time compositor? Currently, each of them has a priority: if there is an active Composite node, it is used; otherwise, an active Viewer node is used; otherwise, an active Split Viewer node is used; otherwise, the compositor does not run. But such a priority makes adding Viewer nodes for inspecting intermediate results a bit inconvenient, as we don't have a separate result for each. Does this use case make sense for the real time compositor?
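The priority described above can be sketched as a simple lookup; the node representation and type names here are illustrative, not the actual Blender data structures.

```python
# Sketch of the output-node priority: Composite outranks Viewer,
# which outranks Split Viewer. Node objects here are plain dicts
# for illustration, not Blender's node types.

PRIORITY = (
    "CompositorNodeComposite",
    "CompositorNodeViewer",
    "CompositorNodeSplitViewer",
)

def find_output_node(nodes):
    """Return the active node of the highest-priority type, or None."""
    for node_type in PRIORITY:
        for node in nodes:
            if node["type"] == node_type and node["active"]:
                return node
    return None  # No output node: the compositor does not run.

nodes = [
    {"type": "CompositorNodeViewer", "active": True, "name": "Viewer"},
    {"type": "CompositorNodeComposite", "active": True, "name": "Composite"},
]
print(find_output_node(nodes)["name"])  # Composite outranks the Viewer
```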
Disable Final Render Compositor
Currently, if one uses the real time compositor, the final render compositor will run even if it will be doing nothing, as it responds to any changes in the node tree. How do we allow the user to only run the real time compositor? One solution is to utilize reusable compositor node trees proposed in a later section, in which case, the user can have a compositor node tree active in the compositor editor without it being used in the render pipeline.
Border Renders
The user can specify a border for rendering, in which case one would expect the compositing to be done only in the border region. However, this is ambiguous because handling border renders is render engine dependent. For instance, while Cycles renders only the border region, EEVEE and Workbench don't and just render the full viewport. Moreover, if the domain of the result of the compositor is larger than the border, should it bleed outside of it, or should the display window be clamped to the border? Since EEVEE-next will support border rendering, the compositor can simply always process only the border region.
Unsupported Nodes
Since the compositor is still under development and will likely remain so when it gets merged to master, some of the nodes will still be unsupported. How should the user be notified of this, and what should the output of such nodes be? We have three options:
- Unsupported nodes just return zeros. This is the current behavior.
- Display an error that the node is unsupported and disable the compositor.
- Pass the inputs through as if the node was muted, which may not be possible for all nodes.
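The third option, mute-like pass-through, can be sketched as follows: forward the first input whose type matches the requested output, falling back to zeros (the current behavior) when no input matches, which is why pass-through is not possible for all nodes. The socket representation is illustrative.

```python
# Sketch of the mute-like pass-through option for unsupported nodes:
# forward the first input whose socket type matches the output type,
# mirroring how muting behaves. Sockets are (type, value) pairs here,
# purely for illustration.

def passthrough(node_inputs, output_type):
    """Return the first input of a matching type, or a zero fallback."""
    for socket_type, value in node_inputs:
        if socket_type == output_type:
            return value
    # No matching input: fall back to zeros (the current behavior).
    return 0.0

inputs = [("FLOAT", 0.5), ("COLOR", (1.0, 0.2, 0.2, 1.0))]
print(passthrough(inputs, "COLOR"))   # the color input passes through
print(passthrough(inputs, "VECTOR"))  # no match, so zeros
```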
User Interface
The user interface of the viewport compositor is straightforward for now. The shading settings panel in the 3D Viewport would include a checkbox for utilizing the scene compositor node tree for real time compositing. If the checkbox is enabled, the viewport pass enum will be hidden, and any options for the real time compositor would appear. This checkbox could become a node tree selector in the future; see the following section on reusable compositor node trees.
Future Proposals
The following sections describe proposals that are not necessary to look into at this stage, but are good to consider.
Reusable Compositor Node Tree
Currently, compositor node trees exist only as part of scenes, where a scene may or may not have a compositor node tree in use. This worked well before because the compositor could only execute as part of the rendering pipeline when rendering a scene. However, now that we want to use the compositor in multiple places and pipelines, it would make sense if the compositor node tree was a reusable ID. In that case, the blend file could contain any number of compositor node trees, which can be assigned to the rendering pipeline, the real time compositor, a VSE strip, and so on. Moreover, compositor node trees can be appended from files and stored as assets for easier reuse.