
Draw Manager: Image Engine

Since Blender 2.91 the draw manager is used for drawing the UV/Image editor. This page gives an overview of the implementation and the future direction.

Overview

When drawing the UV/Image editor, two draw engines are active:

  • Image Engine: Responsible for drawing the image draw/engines/image/image_engine.c
  • Overlay Engine: Responsible for drawing the
    • background draw/engines/overlay/OVERLAY_background.c
    • checkerboard draw/engines/overlay/OVERLAY_background.c
    • unavailability grid draw/engines/overlay/OVERLAY_grid.c
    • UV overlay draw/engines/overlay/OVERLAY_edit_uv.c
    • stretching overlay draw/engines/overlay/OVERLAY_edit_uv.c

Depth Buffer

The depth buffer is used to composite the different layers that are presented to the user.

  • 1.0: The background of the image editor.
  • 0.75: Rendering of the actual image.
  • 0.35-0.25: Rendering of the UV edges; 0.25 are selected edges, 0.35 are unselected edges. This ensures that selected edges are always on top of unselected ones.
  • 0.15-0.05: Rendering of the UV vertices; 0.05 are selected vertices, 0.15 are unselected vertices. This ensures that selected vertices are always on top of unselected ones.

Faces aren't drawn depth-aware.
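
As a rough sketch of how a vertex shader could write these fixed depth values: the identifiers, inputs and flag bit below are illustrative assumptions, not the actual Blender shader interface.

  /* Illustrative GLSL vertex shader: place UV elements at a fixed depth
   * so that the depth test composites selected elements on top. */
  uniform mat4 ModelViewProjectionMatrix;

  in vec2 u;   /* UV coordinate */
  in int flag; /* selection flags (assumed layout) */

  #define VERT_UV_SELECT (1 << 0) /* assumed flag bit */

  void main()
  {
    bool selected = (flag & VERT_UV_SELECT) != 0;
    /* 0.05 for selected vertices, 0.15 for unselected ones. */
    float depth = selected ? 0.05 : 0.15;
    gl_Position = ModelViewProjectionMatrix * vec4(u, 0.0, 1.0);
    /* The UV editor projection is orthographic (w == 1.0), so window
     * depth is (z + 1.0) / 2.0; map the target depth to clip space. */
    gl_Position.z = depth * 2.0 - 1.0;
  }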

Overlays

Having a single overlay engine shared between the UV/Image editor and the 3D viewport allows us to develop an overlay once and, by configuring the vertex shader, use it in both editors. In the future this can be extended to other overlays as well, or even to real-time baking.
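
A minimal sketch of this idea, assuming a compile-time define selects the editor (the define and attribute names are hypothetical):

  /* Shared overlay vertex shader, specialized per editor at compile time. */
  uniform mat4 ModelViewProjectionMatrix;

  #ifdef UV_EDITOR
  in vec2 u;   /* 2D UV coordinate */
  vec3 overlay_position() { return vec3(u, 0.0); }
  #else
  in vec3 pos; /* 3D object-space position */
  vec3 overlay_position() { return pos; }
  #endif

  void main()
  {
    /* Everything past the position fetch is identical in both editors. */
    gl_Position = ModelViewProjectionMatrix * vec4(overlay_position(), 1.0);
  }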

Color management

The UV/Image editor reuses the same color management pipeline as the 3D viewport. See GPUViewport for more information. (Note for early readers: GPUViewport still needs to be documented.)

Edges

Smooth edges

Smooth edges are drawn with a technique similar to the one used for edit-mode edges in the 3D viewport.

  • edit_uv_edges_vert.glsl transforms the vertices into viewport space
  • edit_uv_edges_geom.glsl expands each line into 2 triangles
  • edit_uv_edges_frag.glsl determines the distance between the fragment and the line being drawn; based on this distance, mixing and blending are applied (see the sketch below)
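
A sketch of the fragment stage; the uniform and varying names are assumptions, not the actual shader interface:

  /* Illustrative GLSL fragment stage: fade the edge color based on the
   * distance from the fragment to the line center, producing smooth,
   * anti-aliased edges. */
  uniform vec4 edgeColor;
  uniform float lineWidth; /* total line width in pixels */

  noperspective in float distanceFromLine; /* written by the geometry stage */

  out vec4 fragColor;

  void main()
  {
    float half_width = lineWidth * 0.5;
    /* Full coverage at the line center, smoothly falling off over the
     * last pixel before the edge of the expanded triangles. */
    float coverage = 1.0 - smoothstep(half_width - 1.0, half_width,
                                      abs(distanceFromLine));
    fragColor = vec4(edgeColor.rgb, edgeColor.a * coverage);
  }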

Huge textures

Currently, images that don't fit on the GPU are resized to a power-of-two image that does fit. This gives incorrect feedback to the user: the actual image, and all operations on it, use the huge image, while the user is shown a smaller version with blurry artifacts. For example, on a GPU that supports at most 8192×8192 textures, a 16000×16000 image would be displayed from a downscaled 8192×8192 copy.

Future development should add back support for huge images. The idea is to test whether we can support sparse images.