
Blender compositor project proposal 2014 (draft)

Project brief

This proposal addresses the following issues:

  • Performance:
    • The buffers inside the tile compositor always use 4 floats per pixel; even single-value types are stored as 4 floats. The initial idea (2010) was that uniform buffers could be shared faster between CPU and GPU devices, but large composites (like 4K) need much memory this way.
    • The compositor works with 'Operations'. Operations are reusable commands, but the work that is scheduled to a CPU/GPU is determined by brute force: every time a tile finishes, the whole system is re-evaluated to find the next tile that can be calculated.
    • The tile compositor is fast, but has no caching. Caching was left out intentionally during the first implementation and marked as a future project.
  • Relative nodes & dimensions
    • Support for better sampler algorithms
    • Nodes give different results based on the resolution of the image the compositor is working on. Much time is lost adjusting node setups for a new dimension.
  • Deep Compositing & Canvas compositing
    • When compositing many images with different dimensions and alpha values, a complex node setup is needed, and changes to that node setup are time consuming.
  • Node editor workflow

Phase 1: performance

P1.1: Multiple buffer types

Introducing multiple buffer types in the compositor (COLOR, VECTOR, VALUE). This will lead to less memory usage.
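The idea can be sketched as follows. The class and helper names below are illustrative assumptions for this proposal, not the actual Blender implementation: each buffer allocates only the channels its type needs, instead of a fixed 4 floats per pixel.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch (hypothetical names): each buffer type stores
// only the channels it needs instead of a fixed 4 floats per pixel.
enum class DataType { Value = 1, Vector = 3, Color = 4 };

// Channels per pixel for a given buffer type.
constexpr std::size_t channels(DataType t)
{
  return static_cast<std::size_t>(t);
}

// A buffer sized by its data type: a VALUE buffer uses 1 float per
// pixel, a VECTOR buffer 3, a COLOR buffer 4.
struct MemoryBuffer {
  DataType type;
  std::size_t width, height;
  std::vector<float> pixels;

  MemoryBuffer(DataType t, std::size_t w, std::size_t h)
      : type(t), width(w), height(h), pixels(w * h * channels(t)) {}

  std::size_t bytes() const { return pixels.size() * sizeof(float); }
};
```

For a 4K frame this matters: a VALUE buffer needs only a quarter of the memory a COLOR buffer needs, instead of the same amount as today.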

Technical design

Current state: waiting for merge with trunk (Blender 2.72)

P1.2: Tile

Introducing Tiles as an object. A tile object contains:

  • the operation that can calculate the tile
  • the area on the image where the tile is located (x1, y1, x2, y2)
  • the execution state of the tile (NOT SCHEDULED, SCHEDULED, FINISHED)
  • a list of tiles that need to be completed before the current tile can be calculated
  • a list of tiles that depend on the current tile.

The consequence of the tile object is that:

  • The WorkPackage class will be removed; tiles are the unit of work that can be scheduled.
  • Less evaluation needs to be done in the system, as all dependencies are kept in memory.
  • Better ETA determination. The current implementation uses only user-visible tiles to determine the progress; in the new situation the system knows precisely the amount of work that remains.
  • Tiles can have different dimensions. Operations that work better on whole images are not forced to work on parts of images. This will remove thread locks during tile calculation. The proposed solution would only use thread locks for scheduling.
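A minimal sketch of the proposed Tile object, with the fields listed above kept explicitly in memory so no brute-force re-evaluation is needed. Names and the `ready()` helper are assumptions for illustration, not the final design:

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of the proposed Tile object (hypothetical names).
enum class TileState { NotScheduled, Scheduled, Finished };

struct Operation;  // the operation that can calculate the tile

struct Tile {
  Operation *operation = nullptr;  // operation that calculates this tile
  int x1 = 0, y1 = 0, x2 = 0, y2 = 0;  // area on the image
  TileState state = TileState::NotScheduled;
  std::vector<Tile *> depends_on;  // must be Finished before this tile runs
  std::vector<Tile *> dependents;  // tiles waiting on this tile's result

  // A tile can be scheduled when all its dependencies are finished.
  // Because the dependency lists are kept in memory, the scheduler only
  // inspects this tile's own list instead of re-evaluating the graph.
  bool ready() const
  {
    if (state != TileState::NotScheduled)
      return false;
    for (const Tile *dep : depends_on)
      if (dep->state != TileState::Finished)
        return false;
    return true;
  }
};
```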

Technical design

P1.3: Keep buffers between user actions [only during editing]

The buffers and tiles are kept in memory after the compositor finishes executing. When the compositor runs again, the kept buffers and tiles are loaded and checked for changes; if a tile or one of the tiles it depends on has changed, the tile is marked NOT_SCHEDULED so it will be recalculated. When the compositor is called during rendering, the kept buffers are not used. The memory is also cleared when a new file/scene is loaded. P1.1 and P1.2 need to be completed before this can be implemented.
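The invalidation step can be sketched as a simple propagation over the dependency lists from P1.2; everything below (names, the `changed` flag) is a hypothetical illustration, not the final design:

```cpp
#include <cassert>
#include <vector>

// Sketch of cache invalidation between user actions (hypothetical
// names): a changed tile and every tile that depends on it are marked
// NotScheduled so they are recalculated on the next run.
enum class TileState { NotScheduled, Scheduled, Finished };

struct Tile {
  TileState state = TileState::Finished;  // kept from the previous run
  bool changed = false;                   // did its inputs/settings change?
  std::vector<Tile *> dependents;         // tiles using this tile's result
};

void invalidate(Tile *tile)
{
  if (tile->state == TileState::NotScheduled)
    return;  // already invalidated, stop the recursion
  tile->state = TileState::NotScheduled;
  for (Tile *dep : tile->dependents)
    invalidate(dep);  // dependent results are stale too
}

// Called when the compositor is re-entered during editing.
void revalidate(std::vector<Tile *> &tiles)
{
  for (Tile *t : tiles)
    if (t->changed)
      invalidate(t);
}
```

Unchanged tiles keep their Finished state and their buffers, so only the affected part of the node tree is recomputed.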

P1.4: Redesign and implement the VectorBlur [Optional]

The current VectorBlur node uses a method implemented for the Blender Internal renderer. This implementation is not threadable/tileable and can take a long time. We should reimplement the VectorBlur node, as it is the node that slows down the system the most.

Technical design

Phase 2: separate composite dimension from render dimension

P2.1: PixelSamplers

We currently support some basic pixel samplers (nearest, linear, 3, 4). To increase quality we would like to introduce new pixel samplers that take the pixel size into account; these more advanced samplers are needed during scaling.
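The difference a pixel-size-aware sampler makes can be sketched with a simple box filter; the interface below is a hypothetical illustration, not the proposed API. A nearest or linear sampler reads one small neighbourhood regardless of how large the destination pixel is in source space; when downscaling, averaging the whole source footprint avoids aliasing:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Single-channel image, kept minimal for the sketch.
struct Image {
  int width, height;
  std::vector<float> data;
  float at(int x, int y) const { return data[y * width + x]; }
};

// Pixel-size-aware box sampler (illustrative): average all source
// pixels covered by a destination pixel of size `pixel_size` (measured
// in source pixels) centered at (cx, cy).
float sample_box(const Image &img, float cx, float cy, float pixel_size)
{
  int x0 = std::max(0, (int)std::floor(cx - pixel_size / 2.0f));
  int x1 = std::min(img.width - 1, (int)std::ceil(cx + pixel_size / 2.0f) - 1);
  int y0 = std::max(0, (int)std::floor(cy - pixel_size / 2.0f));
  int y1 = std::min(img.height - 1, (int)std::ceil(cy + pixel_size / 2.0f) - 1);
  float sum = 0.0f;
  int count = 0;
  for (int y = y0; y <= y1; ++y) {
    for (int x = x0; x <= x1; ++x) {
      sum += img.at(x, y);
      ++count;
    }
  }
  return count ? sum / count : 0.0f;
}
```

With `pixel_size` 1 this degenerates to a nearest sample; with larger values (downscaling) every covered source pixel contributes.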

P2.2: Composite resolution

The user will have the option to set a resolution and a sampler for the compositor. Render layers and images are scaled to match the chosen resolution; internally the compositor always works at the compositor resolution. Changing the render percentage will not influence the compositor resolution. Viewer/SplitViewer nodes will always output at the compositor resolution, while FileOutput and CompositeOutput will scale down to the render resolution. The Scale node needs to be adjusted for this as well. The overall speed of the compositor can be influenced by the existing quality setting.
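The two scale steps can be summarized as follows; the names are assumptions for illustration only. Inputs are scaled from their own resolution to the compositor resolution, and file/composite outputs are scaled back to the render resolution:

```cpp
#include <cassert>

// Illustrative sketch (hypothetical names) of the two scale factors
// introduced by a separate composite resolution.
struct Resolution {
  int width, height;
};

// Factor applied to render layers / images on input, so the compositor
// internally always works at the compositor resolution.
float input_scale(Resolution composite, Resolution source)
{
  return (float)composite.width / (float)source.width;
}

// Factor applied by FileOutput / CompositeOutput to return to the
// render resolution.
float output_scale(Resolution composite, Resolution render)
{
  return (float)render.width / (float)composite.width;
}
```

For example, compositing a 3840x2160 render at a 1920x1080 compositor resolution scales inputs by 0.5 and scales the final output back up by 2.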

Phase 3: Canvas and alpha compositing

P3.1: Basic Deep Compositing

To solve the alpha issues when blending images we would like to introduce basic deep compositing. The following deep compositing nodes will be implemented:

  • ConvertColorToDeepNode
    • Input
      • Color
      • Z
    • Output
      • DeepColor
    • Process
      • Stores the color and the Z value in a deep color data type
  • ConvertDeepToColorNode
    • Input
      • DeepColor
      • AlphaLimit
    • Output
      • Color
      • Z
    • Process
      • Processes the deep color data type and results in the alpha-blended color and the first depth that is not alpha transparent (AlphaLimit determines at what alpha value the depth is returned).
  • MergeDeepColor
    • Input
      • Multiple DeepColor
    • Output
      • DeepColor
    • Process
      • Merges multiple DeepColor inputs into a single DeepColor output

This deep compositing support is still basic: it does not support deep color buffers, deep images (OpenEXR), deep blurs, etc.
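The core of MergeDeepColor and ConvertDeepToColor can be sketched as below. The data layout and function names are illustrative assumptions; the essential points from the node descriptions are that merging concatenates per-pixel sample lists, and flattening sorts samples by depth and alpha-blends them front to back, reporting the first depth at which the accumulated alpha passes AlphaLimit:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative deep pixel: a list of (z, premultiplied color, alpha)
// samples. Names are assumptions, not the final design.
struct DeepSample {
  float z;         // depth of the sample
  float color[3];  // premultiplied RGB
  float alpha;
};

using DeepPixel = std::vector<DeepSample>;

// MergeDeepColor: combine the samples of several deep pixels.
DeepPixel merge_deep(const std::vector<DeepPixel> &inputs)
{
  DeepPixel out;
  for (const DeepPixel &p : inputs)
    out.insert(out.end(), p.begin(), p.end());
  return out;
}

// ConvertDeepToColor: sort by depth, alpha-blend front to back, and
// report the first depth where accumulated alpha reaches alpha_limit.
void flatten(DeepPixel pixel, float alpha_limit, float out_color[4], float &out_z)
{
  std::sort(pixel.begin(), pixel.end(),
            [](const DeepSample &a, const DeepSample &b) { return a.z < b.z; });
  out_color[0] = out_color[1] = out_color[2] = out_color[3] = 0.0f;
  out_z = 0.0f;
  bool z_found = false;
  for (const DeepSample &s : pixel) {
    float t = 1.0f - out_color[3];  // remaining transparency
    for (int c = 0; c < 3; ++c)
      out_color[c] += t * s.color[c];
    out_color[3] += t * s.alpha;
    if (!z_found && out_color[3] >= alpha_limit) {
      out_z = s.z;  // first depth that is not alpha transparent
      z_found = true;
    }
  }
}
```

Because every sample keeps its own depth, images can be merged in any order and still blend correctly, which is exactly the alpha problem the node setup currently has to work around.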

P3.2: Canvas compositing

Canvas compositing requires nothing new from the compositor engine; all engine features that are needed are already present. As canvas compositing technically resembles the Mask editor, we would suggest that Sergey work on this subject. (Still needs to be discussed)

Phase 4: Node editor

During discussions with compositor users we received a lot of feedback that really concerns the node editor. As our scope covers only a part of the whole node editor, we propose that Lukas design and implement the needed changes. (Still needs to be discussed)