
DerivedMesh refactoring

The idea behind the derivedmesh refactoring, which started this summer and is still ongoing, is to untangle the derivedmesh drawing code into just a few easy-to-understand functions, and to support the future Blender viewport system.

Derivedmesh drawing is currently split across many methods: we have solid, textured and material drawing methods, each with a "mapped" variation.

First, some terminology:

"Mapped" variants of the methods, draw derivedmesh faces by checking various face parameters that correspond to the original mesh face. This usually happens during selection drawing, since selection flags are stored on the mesh, not derivedmesh, and when blender needs to display the final derivedmesh in edit mode (cage option turned on for instance). All these need the system to lookup data of the original face, thus we use mapped drawing. The important thing to remember here is that mapped drawing mostly concerns UI feedback (drawing a selection overlay on top of the mesh, for instance), and for editmeshes it also handles regular mesh drawing. This makes sense, since in that case, the two meshes can be the same.

"Solid" mode drawing is the solid mode we all know with three lights, mesh normals, and simple material diffuse/specular color support

"Textured" mode basically encompasses a multitude of drawing modes, from regular textured drawing, to vertex color drawing, sometimes basic material drawing on top etc, etc.

"Material and GLSL" drawing are basically our GLSL mode and it's supports display of our blender internal materials.


By far the biggest disadvantage of the current drawing code in Blender is the requirement to check face properties during drawing by iterating over all faces of the derivedmesh. This is not only bad for performance; it also makes the code harder to understand and quite fragile to changes.

Those face checks are done using callbacks, and maintaining the current functionality of Blender while refactoring them out can be a headache. In fact, though, most of those checks boil down to just a few properties:

  • Check if the polygon is hidden
  • Check if the material has changed
  • Check if the texture has changed
  • Check if the polygon is selected (mostly used in the mapped variants of the drawing code)

The simplifying assumption here is that most of those properties will not change during drawing, so we can either pre-sort the triangle indices according to those properties or use an extra layer of vertex data to represent the property. The ideas are explained in [1].

The current code already supports sorting according to materials and hiding flags. This is done with a counting sort over the polygons, which is a 2*n operation (one pass to count, one to scatter; a sketch follows below). This is somewhat expensive, but remember that the current Blender code does n per-face checks every frame. So it can be acceptable to recreate the triangle index buffer when a hiding flag changes, instead of checking every polygon per frame.
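
A minimal sketch of that counting sort, assuming a flat per-triangle material array (the hidden flag can be folded in by widening the bucket key); the names are illustrative:

  #include <stdlib.h>
  #include <string.h>

  /* Counting sort of triangle indices into per-material chunks: two linear
   * passes over the triangles (the "2*n"), one to count and one to scatter.
   * This runs only when the data is invalidated, not per frame. */
  static void sort_tris_by_material(const short *tri_mat, /* material index per triangle */
                                    int num_tris, int num_mats,
                                    int *sorted_tris,     /* out: triangle indices, chunked */
                                    int *chunk_start)     /* out: first slot of each chunk */
  {
      int *count = calloc(num_mats, sizeof(int));

      /* pass 1: count triangles per material */
      for (int i = 0; i < num_tris; i++)
          count[tri_mat[i]]++;

      /* prefix sum turns counts into chunk start offsets */
      for (int m = 0, ofs = 0; m < num_mats; m++) {
          chunk_start[m] = ofs;
          ofs += count[m];
      }

      /* pass 2: scatter each triangle index into its chunk */
      memset(count, 0, num_mats * sizeof(int));
      for (int i = 0; i < num_tris; i++)
          sorted_tris[chunk_start[tri_mat[i]] + count[tri_mat[i]]++] = i;

      free(count);
  }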

The biggest problem is that if we combine all the checks we have to do at index-buffer creation time, the number of chunks becomes huge:

number of materials * number of textures * 2 (hidden or not hidden) * 2 (selected or not selected).

For example, a mesh with 8 materials and 4 textures per material would already produce 8 * 4 * 2 * 2 = 128 chunks.

For that reason, it may make sense to use an extra data layer to represent one of those properties instead. This makes sorting unnecessary for that property but increases the memory requirements. In my code, I was considering using a color data layer for the selection property.
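
For illustration, here is a sketch of filling such a layer, assuming a simple face/corner layout (the structures are hypothetical, not an existing Blender data layer API):

  #define FACE_SELECTED (1 << 0)  /* stand-in selection flag */

  typedef struct ColorByte { unsigned char r, g, b, a; } ColorByte;

  /* Fill a per-corner color layer from the face selection flags, so a shader
   * can tint selected faces without any re-sorting of the index buffer. */
  static void fill_selection_layer(const int *face_corner_start, /* first corner of each face */
                                   const int *face_corner_count, /* corners per face */
                                   const int *face_flag, int num_faces,
                                   ColorByte *layer /* one entry per face corner */)
  {
      const ColorByte sel = {255, 255, 255, 255};
      const ColorByte unsel = {0, 0, 0, 0};

      for (int f = 0; f < num_faces; f++) {
          const ColorByte c = (face_flag[f] & FACE_SELECTED) ? sel : unsel;
          for (int v = 0; v < face_corner_count[f]; v++)
              layer[face_corner_start[f] + v] = c;
      }
  }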

Sorting according to textures during the counting sort for index buffers can be done if we use a hash to count the available textures for a material. This will increase the sorting cost accordingly, of course, so it would be nice to decide whether we really, really want to keep textured drawing in the new viewport system. We probably don't, so someone should make the difficult decision to throw textured drawing out of the window. Even if we do keep it, though, it should be easy to incorporate into the sorting system.
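
A sketch of one way to build such a combined bucket key, assuming a small per-material texture set where a linear scan stands in for the hash (all names are illustrative):

  #define MAX_TEX_PER_MAT 16  /* assumed small upper bound; no overflow check for brevity */

  typedef struct TexSet {
      const void *tex[MAX_TEX_PER_MAT];  /* distinct textures seen for one material */
      int num;
  } TexSet;

  /* Return a small slot index for this texture, adding it if unseen. */
  static int texture_slot(TexSet *set, const void *tex)
  {
      for (int i = 0; i < set->num; i++)
          if (set->tex[i] == tex)
              return i;
      set->tex[set->num] = tex;
      return set->num++;
  }

  /* Combined bucket key: the counting sort can then chunk triangles by
   * material and texture at the same time. */
  static int tri_chunk_key(int mat, TexSet *mat_tex, const void *face_tex)
  {
      return mat * MAX_TEX_PER_MAT + texture_slot(&mat_tex[mat], face_tex);
  }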

So, the final pipeline for derivedmesh drawing becomes clear:

  • Pass a sort flag to the derivedmesh draw function to pre-sort polygons into chunks according to the criteria (hidden flag, material, texture).
  • Pass a data flag for the data we need for the drawing we want to do (tangents, normals, vertex colors, etc.). Each derivedmesh type must implement dedicated methods to upload those data to vertex buffers, as well as to sort triangles according to the sort criteria.
  • Draw the sorted chunks with one draw call each, similarly to how it's explained in [1], and completely avoid iterating through polygons during drawing (iteration and re-upload are of course still needed when the data gets invalidated). A sketch of such an entry point follows this list.
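
Put together, the draw entry point could look roughly like the sketch below; the flag names and Chunk struct are assumptions for illustration, with the upload/sort steps elided:

  #include <GL/gl.h>
  #include <stddef.h>

  /* Hypothetical request flags -- illustrative, not the real Blender API. */
  enum { DM_DATA_NORMALS = 1 << 0, DM_DATA_UVS = 1 << 1, DM_DATA_VCOLS = 1 << 2 };
  enum { DM_SORT_MATERIAL = 1 << 0, DM_SORT_HIDDEN = 1 << 1, DM_SORT_TEXTURE = 1 << 2 };

  typedef struct Chunk { int tri_start, tri_len; int material; } Chunk;

  /* Single entry point: upload the requested data layers, (re)sort the index
   * buffer into chunks if invalid, then issue one draw call per chunk.
   * No per-face checks happen inside the draw loop itself. */
  static void dm_draw(int data_flags, int sort_flags,
                      const Chunk *chunks, int num_chunks)
  {
      (void)data_flags; (void)sort_flags;  /* upload/sort steps elided */

      for (int c = 0; c < num_chunks; c++) {
          /* bind the chunk's material/texture state here */
          glDrawElements(GL_TRIANGLES, chunks[c].tri_len * 3, GL_UNSIGNED_INT,
                         (const void *)(sizeof(GLuint) * 3 * (size_t)chunks[c].tri_start));
      }
  }

A request for GLSL drawing with vertex colors, chunked by material and hidden flag, would then be a single call: dm_draw(DM_DATA_NORMALS | DM_DATA_VCOLS, DM_SORT_MATERIAL | DM_SORT_HIDDEN, chunks, num_chunks).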

This should allow future code to be much more robust and decoupled:

  • Requesting new data types is done by implementing a new derivedmesh upload function for that type (see the dispatch sketch after this list).
  • Requesting new sort types can be done in the same manner, and only involves tweaking the triangle sort functions of the derivedmeshes (the system can probably be abstracted to allow complex combinations too).
  • In theory, just one or two functions (accounting for the mapped variant) are enough to draw anything. The draw function takes two parameters: data request flags and sort request flags.
  • This can be used to do anything: request the upload of debug data, feed custom shaders with those data, and so on.
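
To illustrate the first point, here is a sketch of how such upload functions could be registered and dispatched, reusing the hypothetical flag names from the previous sketch (nothing here is existing API):

  #include <stddef.h>

  enum { DM_DATA_NORMALS = 1 << 0, DM_DATA_VCOLS = 1 << 1 };  /* hypothetical flags */

  typedef void (*DM_UploadFn)(const void *dm, void *vbo);

  static void upload_normals(const void *dm, void *vbo) { (void)dm; (void)vbo; /* fill VBO */ }
  static void upload_vcols(const void *dm, void *vbo)   { (void)dm; (void)vbo; /* fill VBO */ }

  /* Each data flag maps to one upload callback; supporting a new data type
   * only means adding a new table entry plus its upload function. */
  static const struct { int flag; DM_UploadFn fn; } upload_table[] = {
      { DM_DATA_NORMALS, upload_normals },
      { DM_DATA_VCOLS,   upload_vcols },
  };

  static void upload_requested(const void *dm, void *vbo, int data_flags)
  {
      for (size_t i = 0; i < sizeof(upload_table) / sizeof(upload_table[0]); i++)
          if (data_flags & upload_table[i].flag)
              upload_table[i].fn(dm, vbo);
  }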


Future viewport ideas

If we have the above system implemented, then we can use a generic node-based system to display data in any configuration we want. The inputs of the node trees determine the data requests to the meshes, and the node tree itself determines the operations. This can easily be coupled with GLSL material drawing in any form: PBR, BI, even Cycles. We just add the material data to the data requests we have for the node tree, a shader mixes between the two, and that's it. It sounds simple, but of course it's never that simple; I just wanted to give the very high-level idea here.

Pie Menus

Pie menus are far from finished. The current todos are:

  • Support for more than 8 items

Ideas here are to move the extra items to another pie menu, accessible through an auto-generated "More" pie button, or to simply make the new pie menu accessible by some shortcut (maybe the scroll wheel). The second idea allows people to create pie layers by using sequential layout.pie() items in their scripts; users can then scroll between them, with a nice animation too.

  • Support for custom placement of pie items

This should be straightforward, though some care needs to be taken with how automatic enum variable expansion is implemented. What might work is adding pie placement flags to enum items.

[1] http://code.blender.org/2015/06/optimizing-blenders-real-time-mesh-drawing-part-1/