
Stream Based Drawing Design

Problem

Currently blender uses quite a lot of drawing paths. Given the availability of modern hardware, we should strive to simplify the system, get rid of deprecated draw methods, and adopt better methods for sending data to the GPU for drawing.

Proposal

Move to an on-demand, stream-based system that generates streams from DerivedMeshes. Different drawing modes and tools require different data streams to function properly. A stream-based design ensures we have only the data we need, when we need it, in a unified way.

Definitions/Considerations

Stream
Is a data array sent to the GPU. A single stream consists of an array of a single per-vertex attribute, such as normals, vertex positions, or UVs.
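
As a rough illustration, a stream could be described by a small struct holding the attribute data plus enough metadata to upload and reuse it. The names below are a hypothetical sketch, not existing Blender structs:

  /* Hypothetical sketch of a per-vertex attribute stream. */
  typedef enum StreamType {
      STREAM_VERTEX_POS,  /* 3 floats per vertex */
      STREAM_NORMAL,      /* 3 floats per vertex */
      STREAM_UV,          /* 2 floats per vertex */
  } StreamType;

  typedef struct GPUStream {
      StreamType type;
      int num_verts;     /* number of vertices in the stream */
      int components;    /* floats per vertex: 2 for UVs, 3 for positions */
      float *data;       /* CPU-side copy, num_verts * components floats */
      unsigned int vbo;  /* OpenGL buffer object holding the uploaded data */
  } GPUStream;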

Indexed Drawing
Is done using glDrawElements and is typically much faster than regular glDrawArrays (see the sketch after this list) because:

  • We avoid duplicating data sent to the GPU, and
  • GPUs employ a post-transform vertex cache that reuses the results of vertex calculations when the same index appears again
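
For illustration only (hypothetical helper functions, legacy vertex-array style OpenGL, and a valid context assumed), here is the same quad submitted both ways: with glDrawArrays the shared vertices are stored twice, while glDrawElements stores each unique vertex once and references it by index:

  #include <GL/gl.h>

  /* Non-indexed: two triangles sharing an edge need 6 full vertices. */
  static const float verts_flat[6][3] = {
      {0, 0, 0}, {1, 0, 0}, {1, 1, 0},
      {0, 0, 0}, {1, 1, 0}, {0, 1, 0},
  };

  /* Indexed: the same quad needs only 4 unique vertices plus 6 small
   * indices, and the vertex cache can reuse the shared results. */
  static const float verts_indexed[4][3] = {
      {0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0},
  };
  static const GLuint quad_indices[6] = {0, 1, 2, 0, 2, 3};

  void draw_quad_arrays(void)
  {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, verts_flat);
      glDrawArrays(GL_TRIANGLES, 0, 6);
  }

  void draw_quad_elements(void)
  {
      glEnableClientState(GL_VERTEX_ARRAY);
      glVertexPointer(3, GL_FLOAT, 0, verts_indexed);
      glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, quad_indices);
  }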

Welder
Is a function that detects when a vertex carries more than one value for the same attribute and, if so, splits it into duplicate vertices with the same coordinates, one per attribute value. This may be needed because OpenGL assigns exactly one value per attribute per vertex, while in blender more than one value per attribute can be assigned to the same vertex (for example, a vertex shared by faces with different UVs). It would also account for different materials on the mesh.
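
A minimal welder sketch, assuming per-face-corner input (position plus UV) and using a naive O(n²) search where a real implementation would use a hash table; none of these names are existing Blender code:

  #include <stdlib.h>
  #include <string.h>

  typedef struct WeldVert {
      float co[3];  /* vertex position */
      float uv[2];  /* per-corner attribute that may differ between faces */
  } WeldVert;

  /* Builds a list of unique (position, uv) vertices plus one index per
   * corner, so the result can be drawn with glDrawElements. Returns the
   * number of unique vertices written to *r_verts. */
  int weld(const WeldVert *corners, int num_corners,
           WeldVert **r_verts, unsigned int **r_indices)
  {
      WeldVert *verts = malloc(sizeof(WeldVert) * num_corners);
      unsigned int *indices = malloc(sizeof(unsigned int) * num_corners);
      int num_verts = 0;

      for (int c = 0; c < num_corners; c++) {
          int found = -1;
          /* Reuse an output vertex only if all attributes match exactly. */
          for (int v = 0; v < num_verts; v++) {
              if (memcmp(&verts[v], &corners[c], sizeof(WeldVert)) == 0) {
                  found = v;
                  break;
              }
          }
          if (found == -1) {
              verts[num_verts] = corners[c];
              found = num_verts++;
          }
          indices[c] = (unsigned int)found;
      }

      *r_verts = verts;
      *r_indices = indices;
      return num_verts;
  }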


When designing such a system, we need to consider:

  • Stream availability.
    • Send data streams only when requested/needed by the drawing/tool mode
  • Stream generation.
    • Stream generation should be done once and reused as much as possible (given memory constraints)
  • Optimum data drawing.
    • If possible, send data to the GPU as indexed streams. This requires writing a welder to generate the index lists and running it prior to generating the stream buffers.
  • Stream Update.
    • Welding is expensive, so it probably should not run on every modifier update. It is therefore very likely that we will have to keep the current glDrawArrays path for quite a few cases. However, it should be possible to update only the stream vertex positions for displacement-only modifier stacks. Also, objects without modifiers can easily benefit from indexed drawing, since the welder would only need to run once: on exit from edit mode.
  • Graceful fail
    • Requested draw modes should be able to fall back on other modes or warn the user if the requested data is not available (ideally, this should be detected before requesting it)
  • PBVH drawing
    • Not very well versed in this area; awaiting comments
  • GLSL
    • We need to support generic attributes in a way that the GLSL material system can easily request

What all this basically amounts to is a dependency system that tags GPU streams for re-evaluation when mesh data changes, and generates a GPU stream when one requested by the tool/draw mode combination is missing.
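
A minimal sketch of that dependency system, reusing the hypothetical GPUStream/StreamType names from above (again, none of this is existing Blender API): a dirty bitmask per object cache plus an on-demand getter.

  struct DerivedMesh;  /* Blender's evaluated mesh */

  enum {
      STREAM_DIRTY_POS    = (1 << STREAM_VERTEX_POS),
      STREAM_DIRTY_NORMAL = (1 << STREAM_NORMAL),
      STREAM_DIRTY_UV     = (1 << STREAM_UV),
      STREAM_DIRTY_ALL    = ~0,
  };

  typedef struct GPUDrawCache {
      GPUStream streams[3];  /* indexed by StreamType */
      int dirty;             /* bitmask of STREAM_DIRTY_* flags */
  } GPUDrawCache;

  /* Fills the CPU array and (re)uploads the VBO; body omitted in this sketch. */
  void gpu_stream_build(GPUStream *stream, struct DerivedMesh *dm, StreamType type);

  /* Called from the mesh/modifier update: mark only what actually changed,
   * e.g. a displacement-only stack would pass STREAM_DIRTY_POS | STREAM_DIRTY_NORMAL. */
  void gpu_streams_tag(GPUDrawCache *cache, int dirty_flags)
  {
      cache->dirty |= dirty_flags;
  }

  /* Called by the draw/tool code: (re)build only the streams that this mode
   * actually requests and that are missing or tagged dirty. */
  GPUStream *gpu_stream_get(GPUDrawCache *cache, struct DerivedMesh *dm, StreamType type)
  {
      GPUStream *stream = &cache->streams[type];

      if (stream->data == NULL || (cache->dirty & (1 << type))) {
          gpu_stream_build(stream, dm, type);
          cache->dirty &= ~(1 << type);
      }
      return stream;
  }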


What we already have in blender/what needs implementation

Modifiers already have callbacks that do custom drawing, but since the system was written for immediate mode, it lacks consideration for custom generation of buffers. Modifiers should also take care of doing their own buffer generation when it is possible to do so in an optimal, non-generic way, or otherwise reuse one of the generic functions that either do welding (which should be the way to go for displacement-only modifier stacks) or regular glDrawArrays-style drawing. Currently in blender, glDrawArrays has been implemented, but it lacks the generate-on-request system; instead we use either immediate mode (if stream-based generation is too complex, for instance for GLSL) or GPU arrays, depending on the draw mode. In essence, to get close to the proposed system, we need:

  • A welder and generic glDrawElements support
  • A centralized stream dependency system. When data changes on a mesh, we need to tag streams for update
  • Support for GLSL attribute streams (see the sketch after this list).
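
For the last point, binding a named generic attribute stream to a GLSL material could look roughly like this (hypothetical API building on the sketches above; assumes an OpenGL 2.0+ function loader such as GLEW, and only the gl* calls are real):

  #include <GL/glew.h>

  /* Bind the stream backing a named generic attribute (e.g. a UV or vertex
   * color layer requested by a GLSL material) to the given shader program. */
  void gpu_bind_attrib_stream(GPUDrawCache *cache, struct DerivedMesh *dm,
                              GLuint program, const char *name, StreamType type)
  {
      GPUStream *stream = gpu_stream_get(cache, dm, type);
      GLint loc = glGetAttribLocation(program, name);

      if (loc == -1) {
          return;  /* the shader does not use this attribute */
      }

      glBindBuffer(GL_ARRAY_BUFFER, stream->vbo);
      glEnableVertexAttribArray((GLuint)loc);
      glVertexAttribPointer((GLuint)loc, stream->components, GL_FLOAT,
                            GL_FALSE, 0, NULL);
  }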

GPU feature considerations

Solid vs Smooth Shading.
We may be able to avoid a separate setup step for smooth shading if we reconstruct normals in screen space, using GLSL's derivative built-in functions. This requires relatively recent hardware, of course.
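
As a sketch of the idea (the varying name is an assumption): the screen-space derivatives of the interpolated view-space position span the triangle's plane, so their cross product yields the face normal with no extra per-vertex normal stream.

  /* Fragment shader snippet (GLSL), shown here as a C string constant. */
  static const char *screen_space_normal_frag =
      "varying vec3 varPosition;  /* view-space position from the vertex shader */\n"
      "void main()\n"
      "{\n"
      "    vec3 nor = normalize(cross(dFdx(varPosition), dFdy(varPosition)));\n"
      "    float shade = max(dot(nor, vec3(0.0, 0.0, 1.0)), 0.0);\n"
      "    gl_FragColor = vec4(vec3(shade), 1.0);\n"
      "}\n";
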
Instancing
We can reuse buffers to do instancing using one of the mechanisms available in OpenGL 3.3+. If we do, we should be able to detect nested instancing (such as an array modifier combined with dupliverts) and adjust the instanced calls appropriately.
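
For illustration (OpenGL 3.3 core calls via a loader such as GLEW; buffer and attribute setup are assumed to have happened elsewhere, and the attribute indices are arbitrary): per-instance data is fed through glVertexAttribDivisor and the whole batch is drawn with a single instanced call.

  #include <GL/glew.h>

  /* Assumes a bound VAO where attribute 0 holds per-vertex positions and
   * attribute 1 holds a VBO of per-instance offsets. */
  void draw_instanced(GLsizei num_indices, GLsizei num_instances)
  {
      /* Advance attribute 1 once per instance instead of once per vertex. */
      glVertexAttribDivisor(1, 1);

      /* One call draws every copy; nested instancing (e.g. array modifier
       * plus dupliverts) would multiply num_instances accordingly. */
      glDrawElementsInstanced(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT,
                              NULL, num_instances);
  }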