
A frequent request by users is to optimize drawing speed. This document describes the design of GPU-based texture painting.

Discussion

Texture painting is one of the strong points of Blender; however, the current architecture of the system is somewhat limited by the computational complexity of the algorithm.

Currently, the algorithm works as follows (see the sketch after this list):

  • When a paint stroke begins, calculate the screen-space positions of the vertices
  • For each paint step, find the triangles inside the brush, and for each pixel inside each triangle:
    • Calculate barycentric coordinates
    • Find the UV coordinate and texture affected
    • Send the appropriate number of pixels to a threaded function that copies brush pixels to the image
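
For illustration, here is a minimal sketch of the per-pixel work in the current CPU path. The names and layout are hypothetical and simplified, not Blender's actual paint code:

    typedef struct { float x, y; } Vec2;

    /* Hypothetical stand-in for the threaded brush-copy function. */
    void paint_pixel_threaded(float u, float v, int px, int py);

    /* Barycentric coordinates of point p in screen-space triangle
     * (a, b, c); used both for the inside test and to interpolate UVs. */
    static void barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c, float w[3])
    {
        float d = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
        w[0] = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / d;
        w[1] = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / d;
        w[2] = 1.0f - w[0] - w[1];
    }

    /* Paint one triangle: walk its screen-space bounding box and map
     * every covered pixel back to the image through the UVs. */
    static void paint_triangle(Vec2 v[3], Vec2 uv[3],
                               int xmin, int ymin, int xmax, int ymax)
    {
        int x, y;
        for (y = ymin; y <= ymax; y++) {
            for (x = xmin; x <= xmax; x++) {
                float w[3], s, t;
                Vec2 p;
                p.x = (float)x;
                p.y = (float)y;
                barycentric(p, v[0], v[1], v[2], w);
                if (w[0] < 0.0f || w[1] < 0.0f || w[2] < 0.0f)
                    continue; /* pixel not covered by the triangle */
                /* Interpolate the UV and hand the pixel over. */
                s = w[0] * uv[0].x + w[1] * uv[1].x + w[2] * uv[2].x;
                t = w[0] * uv[0].y + w[1] * uv[1].y + w[2] * uv[2].y;
                paint_pixel_threaded(s, t, x, y);
            }
        }
    }

Note how every pixel costs a barycentric evaluation and a UV interpolation on the CPU; these are exactly the steps a rasterizer performs in hardware.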

Porting the algorithm to the GPU is a good opportunity to optimize several of these steps:

  • Barycentric calculation in UV space is done by the hardware
  • Pixels are generated automatically by the triangle rasterizer
  • Brush texture filtering comes for free
  • The GPU does all of the above with far more parallelism


Details

The basic idea of the implementation is very simple but requires recent hardware to run:

We bind the textures that the mesh uses in the currently bound MTex layer to an equal number of render targets. The hardware is usually limited to four, so we may have to do multiple passes (this can be optimized with transform feedback). The render target index is passed per face using a flat vertex attribute. A rough sketch of this setup follows.
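
As a sketch (plain OpenGL through GLEW; the helper name and error handling are hypothetical), binding up to four image textures as color attachments of one framebuffer object:

    #include <GL/glew.h>

    enum { MAX_TARGETS = 4 }; /* common MRT limit mentioned above */

    /* Attach up to MAX_TARGETS image textures to one FBO so a single
     * paint pass can write to all of them; meshes using more images
     * need additional passes. */
    static void bind_paint_targets(GLuint fbo, const GLuint *tex, int totimage)
    {
        GLenum bufs[MAX_TARGETS];
        int i, count = (totimage < MAX_TARGETS) ? totimage : MAX_TARGETS;

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        for (i = 0; i < count; i++) {
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                                   GL_TEXTURE_2D, tex[i], 0);
            bufs[i] = GL_COLOR_ATTACHMENT0 + i;
        }
        glDrawBuffers(count, bufs);
    }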

Then we transform the mesh using the regular OpenGL pipeline, and in the vertex shader we check the position of the vertex against the brush position and radius in screen space (an easy CPU calculation, passed as uniforms to the shader). We need to know whether the whole triangle is outside the brush radius, to avoid sending its pixels to the render targets. To do that, we can pass the coordinates of the other two triangle vertices as attributes of each vertex and calculate the point/edge distance for each edge of the triangle. On systems supporting geometry shaders, this can be optimized away by doing the calculation there. For triangles that fail the test, we either collapse the vertices, generating no fragments, or send the triangle outside clip space. For triangles that pass the test, we do the following: set the homogeneous coordinate to 1.0 (important so that distortion is avoided!) and set the x, y coordinates to the UV coordinates of the vertex. A vertex shader along these lines is sketched below.
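
A minimal GLSL sketch of that vertex stage, embedded as it would be in C; all uniform and attribute names are hypothetical, and the case where the brush center lies fully inside the triangle is omitted for brevity:

    static const char *paint_vert_src =
        "uniform mat4 mvp;\n"
        "uniform vec2 brush_pos;      /* brush center in NDC */\n"
        "uniform float brush_radius;  /* brush radius in NDC */\n"
        "attribute vec2 uv;\n"
        "attribute vec3 co_other1;    /* other two triangle vertices */\n"
        "attribute vec3 co_other2;\n"
        "\n"
        "float seg_dist(vec2 a, vec2 b, vec2 p) {\n"
        "    vec2 ab = b - a;\n"
        "    float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);\n"
        "    return length(p - (a + t * ab));\n"
        "}\n"
        "\n"
        "vec2 project(vec4 v) { vec4 h = mvp * v; return h.xy / h.w; }\n"
        "\n"
        "void main() {\n"
        "    vec2 p0 = project(gl_Vertex);\n"
        "    vec2 p1 = project(vec4(co_other1, 1.0));\n"
        "    vec2 p2 = project(vec4(co_other2, 1.0));\n"
        "    bool hit = seg_dist(p0, p1, brush_pos) <= brush_radius ||\n"
        "               seg_dist(p1, p2, brush_pos) <= brush_radius ||\n"
        "               seg_dist(p2, p0, brush_pos) <= brush_radius;\n"
        "    if (!hit) {\n"
        "        /* Reject: send the triangle outside clip space so it\n"
        "         * generates no fragments. */\n"
        "        gl_Position = vec4(2.0, 2.0, 2.0, 1.0);\n"
        "    } else {\n"
        "        /* Accept: rasterize in UV space; w = 1.0 avoids\n"
        "         * perspective distortion, uv 0..1 maps to clip -1..1. */\n"
        "        gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);\n"
        "    }\n"
        "}\n";

Because all three vertices of a triangle evaluate the same test over the same three projected positions, they always agree on accept or reject.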

In the pixel shader, we write to the appropriate render target, using a sampler for the brush texture if needed, and probably a prepass for clone-style brushes. A matching sketch follows.
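
A fragment-stage sketch with the same caveats (hypothetical names). Since all three vertices of a face carry the same target index, plain interpolation preserves it; the zero-alpha writes, combined with alpha blending, leave the non-selected images untouched:

    static const char *paint_frag_src =
        "uniform sampler2D brush;   /* free hardware texture filtering */\n"
        "uniform vec4 paint_color;\n"
        "varying float target;      /* per-face render target index */\n"
        "varying vec2 brush_uv;     /* position within the brush */\n"
        "\n"
        "void main() {\n"
        "    vec4 col = paint_color * texture2D(brush, brush_uv);\n"
        "    /* Write paint only to the selected target; vec4(0.0)\n"
        "     * plus alpha blending leaves the others unchanged. */\n"
        "    for (int i = 0; i < 4; i++)\n"
        "        gl_FragData[i] = (abs(float(i) - target) < 0.5)\n"
        "                             ? col : vec4(0.0);\n"
        "}\n";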

Dealing with UVs outside the 0-1 range

UVs outside the 0-1 range will fail for a simple reason: the triangle will be clipped at the framebuffer borders. To address this, we need to preprocess the mesh and retessellate triangles whose UVs fall outside the 0-1 range. This can be done as a preprocessing step when entering texture paint mode; a test for which triangles need it is sketched below.
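
A sketch of the per-triangle test such a pass could run (a hypothetical helper, not existing Blender code):

    #include <math.h>

    /* A triangle needs retessellation only if its UVs span more than
     * one unit square of UV space; a face that is merely offset into
     * another tile can instead be translated back into 0-1. */
    static int tri_needs_retessellation(const float uv[3][2])
    {
        float tile_u = floorf(uv[0][0]), tile_v = floorf(uv[0][1]);
        int i;

        for (i = 1; i < 3; i++)
            if (floorf(uv[i][0]) != tile_u || floorf(uv[i][1]) != tile_v)
                return 1; /* crosses a tile border: split along it */

        return 0; /* one tile: offset the UVs by (-tile_u, -tile_v) */
    }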

Dealing with seam bleeding

Seam detection is already done in the current implementation. An easy way to add seam bleeding is to offset the output of the vertex shader when the vertex belongs to a seam UV, which is straightforward if the other vertex coordinates are passed as attributes, as in the sketch below.
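
One possible form of that offset, again with hypothetical names and assuming a per-vertex seam flag plus the other two UVs as attributes: the seam vertex is pushed outward in UV space, away from the triangle's UV centroid, so rasterization covers a few extra texels past the seam edge.

    static const char *seam_vert_src =
        "attribute vec2 uv, uv_other1, uv_other2;\n"
        "attribute float seam;  /* 1.0 when the vertex lies on a seam */\n"
        "uniform float bleed;   /* bleed distance in UV units */\n"
        "\n"
        "void main() {\n"
        "    /* Push seam vertices away from the UV centroid so the\n"
        "     * rasterized area extends past the seam edge. */\n"
        "    vec2 center = (uv + uv_other1 + uv_other2) / 3.0;\n"
        "    vec2 shifted = uv + seam * bleed * normalize(uv - center);\n"
        "    gl_Position = vec4(shifted * 2.0 - 1.0, 0.0, 1.0);\n"
        "}\n";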