
= GPU Module Overview =

The GPU module is an abstraction layer between Blender and the graphics library (GL) provided by the operating system. These GLs are abstracted away in GPUBackends. There is a GLBackend that supports OpenGL 3.3 on Windows, macOS and Linux.

Currently (December 2021) the GPU module is in development to support other GPUBackends like Vulkan and MSL. This will impact the GLSL code and how GPUShader parameters are managed. When writing this page we left out some details that we know will change in the near future, until these changes have been clarified and implemented.

The GPU module can be used to draw geometry or to perform computational tasks on a GPU. This overview is aimed at developers who want a quick start on how to use the GPU module to draw or compute. Basic knowledge of a GL (OpenGL core profile 3.3 or similar) is required, as the same concepts are used.

== Drawing pipeline ==
This section gives an overview of the drawing pipeline of the GPU module.



=== Textures ===
Textures are used to hold pixel data. Textures can be 1, 2 or 3 dimensional, cube maps, or arrays of 2D textures/cube maps. The internal storage of a texture (how the pixels are stored in memory on the GPU) can be set when creating the texture.
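For example, a 2D texture can be created like this (a sketch assuming the Blender 3.x `GPU_texture_create_2d` signature; the texture name and format are arbitrary):

```c
/* A 256x256 half-float RGBA texture with a single mip level and no initial data. */
GPUTexture *tex = GPU_texture_create_2d("my_texture", 256, 256, 1, GPU_RGBA16F, NULL);

/* ... use the texture ... */

GPU_texture_free(tex);
```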

=== Frame buffer ===
A frame buffer is a group of textures you can render onto. These textures are arranged in a fixed set of slots. The first slot is reserved for a depth/stencil buffer. The other slots can be filled with regular textures, cube maps, or layer textures.

`GPU_framebuffer_ensure_config` is used to create/update a framebuffer.
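A sketch of its usage (assuming `depth_tx` and `color_tx` are previously created GPUTextures):

```c
GPUFrameBuffer *fb = NULL;
GPU_framebuffer_ensure_config(&fb, {
    GPU_ATTACHMENT_TEXTURE(depth_tx), /* First slot: depth/stencil buffer. */
    GPU_ATTACHMENT_TEXTURE(color_tx), /* Color attachment. */
});
GPU_framebuffer_bind(fb);
```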

=== Shader program ===
A GPUShader is a program that runs on the GPU. The program can have several stages depending on its usage. When rendering geometry it should at least have a vertex and a fragment stage, and it can have an optional geometry stage. Using geometry stages is not recommended, as Apple does not support them.

The stages are executed in a fixed order. When drawing geometry, first the vertex stage is performed, then the geometry stage (when available), and then the fragment stage. The logic of these stages is loaded as GLSL code.

`GPU_shader_create` will create a GPUShader: it loads and compiles the vertex and fragment stages and links the stages into a program that can be used on the GPU. It also generates a GPUShaderInterface that handles lookup of input parameters (attributes, uniforms, uniform buffers, textures and shader storage buffer objects).
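A minimal sketch (assuming `vert_glsl` and `frag_glsl` hold the GLSL source strings; the argument order follows the Blender 3.0-era `GPU_shader_create` signature):

```c
GPUShader *shader = GPU_shader_create(
    vert_glsl,    /* Vertex stage source. */
    frag_glsl,    /* Fragment stage source. */
    NULL,         /* Optional geometry stage source. */
    NULL,         /* Optional library source. */
    NULL,         /* Optional defines. */
    "my_shader"); /* Name, used in debug/error messages. */
```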

==== Cross Compilation ====
Our target is to cross compile GLSL to OpenGL3/4 and Vulkan. To create shaders that can be cross compiled the GPUShaderCreateInfo should be used when creating shaders.

This mechanism was introduced in Blender 3.1 and we are currently in the process of migrating all internal shaders; we expect that all internal shaders will be migrated in Blender 3.2.

See EEVEE_%26_Viewport/GPU_Module/GLSL Cross Compilation for more details.

=== Geometry ===
Geometry is defined by a `GPUPrimType`, one index buffer (IBO) and one or more vertex buffers (VBOs). The GPUPrimType defines how the index buffer should be interpreted.

Indices inside the index buffer define the order in which to read elements from the vertex buffer(s). Vertex buffers are a table where each row contains the data of an element. When multiple vertex buffers are used they are considered to be different columns of the same table. This matches how GL backends organize geometry on GPUs.

==== Index buffers ====
Index buffers can be created using a `GPUIndexBufBuilder`.
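For example, building an index buffer for two triangles that form a quad (a sketch; the indices refer to a vertex buffer holding four vertices):

```c
GPUIndexBufBuilder builder;
/* Primitive type, number of primitives, number of vertices that can be indexed. */
GPU_indexbuf_init(&builder, GPU_PRIM_TRIS, 2, 4);
GPU_indexbuf_add_tri_verts(&builder, 0, 1, 2);
GPU_indexbuf_add_tri_verts(&builder, 2, 1, 3);
GPUIndexBuf *ibo = GPU_indexbuf_build(&builder);
```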

==== Vertex buffer ====
Vertex buffers contain the actual data; the attributes inside a vertex buffer should match the attributes of the shader. Before a buffer can be created, the format of the buffer should be defined.

Create a vertex buffer with this format and allocate the elements.

Fill the buffer with the data.
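The steps above can be sketched as follows (assuming a shader with a single `pos` attribute and four vertices to upload):

```c
/* Define the format of the buffer. */
GPUVertFormat format = {0};
uint pos = GPU_vertformat_attr_add(&format, "pos", GPU_COMP_F32, 3, GPU_FETCH_FLOAT);

/* Create a vertex buffer with this format and allocate the elements. */
GPUVertBuf *vbo = GPU_vertbuf_create_with_format(&format);
GPU_vertbuf_data_alloc(vbo, 4);

/* Fill the buffer with the data. */
const float positions[4][3] = {
    {-1.0f, -1.0f, 0.0f}, {1.0f, -1.0f, 0.0f}, {-1.0f, 1.0f, 0.0f}, {1.0f, 1.0f, 0.0f}};
for (int i = 0; i < 4; i++) {
  GPU_vertbuf_attr_set(vbo, pos, i, positions[i]);
}
```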

=== Batch ===
Use GPUBatches to draw geometry. A GPUBatch combines the geometry with a shader and its parameters, and has functions to perform a draw call. To perform a draw call, the following steps should be taken:


# Construct the geometry.
# Construct a GPUBatch with the geometry.
# Attach a GPUShader to the GPUBatch with the `GPU_batch_set_shader` function, or attach a built-in shader using the `GPU_batch_program*` functions.
# Set the parameters of the GPUShader using the `GPU_batch_uniform*`/`GPU_batch_texture_bind` functions.
# Call one of the `GPU_batch_draw*` functions.

This will draw the geometry on the active frame buffer using the shader and the loaded parameters.
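The steps above can be sketched as follows (assuming `vbo`, `ibo` and `shader` were created as in the previous sections; the `color` uniform is an assumption about the shader's interface):

```c
GPUBatch *batch = GPU_batch_create(GPU_PRIM_TRIS, vbo, ibo);
GPU_batch_set_shader(batch, shader);
GPU_batch_uniform_4f(batch, "color", 1.0f, 0.0f, 0.0f, 1.0f);
GPU_batch_draw(batch);
GPU_batch_discard(batch);
```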

=== Immediate mode and built-in shaders ===
To ease development of drawing panels/UI buttons, the GPU module provides an immediate mode. This is a wrapper on top of what is explained above, but in a more legacy OpenGL fashion.

Blender provides built-in shaders, which are widely used to draw the user interface. A shader can be activated by calling `immBindBuiltinProgram`. This shader program needs a vertex buffer with a pos attribute, and a color can be set as a uniform.

Fill the vertex buffer with the starting and ending position of the line to draw. By calling `immEnd` the data is drawn on the GPU.
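A sketch of drawing a single line with the built-in uniform-color shader:

```c
uint pos = GPU_vertformat_attr_add(immVertexFormat(), "pos", GPU_COMP_F32, 3, GPU_FETCH_FLOAT);

immBindBuiltinProgram(GPU_SHADER_3D_UNIFORM_COLOR);
immUniformColor4f(1.0f, 1.0f, 1.0f, 1.0f);

/* Two vertices: the start and end of the line. */
immBegin(GPU_PRIM_LINES, 2);
immVertex3f(pos, 0.0f, 0.0f, 0.0f);
immVertex3f(pos, 1.0f, 1.0f, 0.0f);
immEnd(); /* The data is drawn on the GPU. */

immUnbindProgram();
```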

== Compute pipeline ==
Besides drawing geometry onto a texture, you can also use the GPU module for computational tasks. Currently the compute pipeline should only be used after checking that the platform can handle compute tasks; there should always be a CPU fallback implemented for platforms that don't support compute. We expect that in 2022 all supported platforms will have compute capabilities.

`GPU_compute_shader_support` can be called to check if the platform supports compute tasks.

A compute task is a variant of a GPUShader that only has a compute stage.

To activate the program the shader should be bound to the GPU device.

After the bind, the parameters can be loaded with the `GPU_texture_(image_)bind` and `GPU_shader_uniform*` functions.

The compute task can be started with the `GPU_compute_dispatch` function. In `source/blender/gpu/tests/gpu_shader_test.cc` there are several examples of how to use the compute pipeline.
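A rough sketch of the whole sequence, modeled on the tests in `gpu_shader_test.cc` (the `compute_glsl` source string and the `img_output` binding name are assumptions):

```c
if (!GPU_compute_shader_support()) {
  /* Fall back to a CPU implementation. */
  return;
}

GPUShader *shader = GPU_shader_create_compute(compute_glsl, NULL, NULL, "my_compute");
GPU_shader_bind(shader);

/* Load the parameters, e.g. bind an image texture the compute stage writes to. */
GPU_texture_image_bind(tex, GPU_shader_get_texture_binding(shader, "img_output"));

/* Dispatch the compute task and wait for the result to be available. */
GPU_compute_dispatch(shader, 512, 512, 1);
GPU_memory_barrier(GPU_BARRIER_TEXTURE_FETCH);

GPU_shader_unbind();
GPU_shader_free(shader);
```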

== Debugging ==
Debugging on GPUs can be difficult, as you cannot step through your code with a debugger. Tools like RenderDoc help to detect what a call actually does on the GPU by recording the state before and after each call.

Starting Blender with the `--debug-gpu` parameter adds more context to GPU calls to ease debugging.