From BlenderWiki


Implementation Notes

Light Interactions

The light interactions could be named, in both the UI and the source code, according to the scheme Light/Environment/Indirect x Diffuse/Specular/Transmission. This gives an overview of what is currently in Blender:

From         BxDF          Implemented   Note
Light        Diffuse       Yes           Wrong falloff
Light        Specular      Yes           Wrong falloff
Light        Transmission  No
Environment  Diffuse       Yes (Color)   AO; BRDF not taken into account
Environment  Specular      Part          Only with Mirror, fade to sky
Environment  Transmission  Yes           Ray transparency; no shadows, no BTDF
Indirect     Diffuse       No            Radiosity was removed
Indirect     Specular      Yes           Mirror; separate, limited phong BRDF
Indirect     Transmission  Yes           Ray transparency; limited phong BTDF
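The Light/Environment/Indirect x Diffuse/Specular/Transmission naming above could be mirrored in code roughly as follows. This is only a sketch; the enum and function names are hypothetical and do not exist in the Blender sources.

```c
#include <assert.h>

/* Hypothetical naming of light interactions: each interaction is a
 * combination of a light source category and a BxDF component. */
typedef enum LightSource {
    SOURCE_LIGHT,        /* direct lamps */
    SOURCE_ENVIRONMENT,  /* sky / world lighting */
    SOURCE_INDIRECT      /* bounced light */
} LightSource;

typedef enum BxDFComponent {
    BXDF_DIFFUSE,
    BXDF_SPECULAR,
    BXDF_TRANSMISSION
} BxDFComponent;

/* Pack the 3x3 matrix of interactions into a single index 0..8,
 * e.g. for a per-interaction enable/implemented table. */
static int interaction_index(LightSource src, BxDFComponent comp)
{
    return (int)src * 3 + (int)comp;
}
```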

Interfaces

The source code will be reorganized to separate components. For example, lamp and material code is currently mixed, and special exceptions exist for some BRDF/lamp combinations. These should be removed, so that adding e.g. a new BXDF or Lamp is a matter of changing things in one place, with interaction between elements handled through an abstract interface.

Some of the interfaces will be: Lamp, Material, Surface Shader, BXDF, Environment, Camera, .. this is all fairly standard but still needs to be worked out better.

Derivatives

Handling of derivatives in the render engine is shaky: ray tracing does not support them, and environment lighting still assumes x/y derivatives in image space, which does not work correctly for things like AO. The texture derivative code for bump mapping could also benefit from a more generic system for retrieving derivatives. It would be good to clean this up, but it is also potentially a time sink.

Node System

Shading nodes will be able to pass along both colors/values and BXDFs. A BXDF would be defined as:

typedef struct BXDF {
    void (*eval)(..);
    void (*sample)(..);
} BXDF;

These function pointers would then correspond to functions defined by the node.
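One way the elided signatures might be filled in is sketched below. The parameter lists and the node_data member are hypothetical; the actual interface still needs to be designed.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical concrete form of the BXDF interface: eval() returns the
 * BXDF color for a given incoming/outgoing direction pair, sample()
 * importance-samples an outgoing direction and reports its pdf. */
typedef struct BXDF {
    void (*eval)(const struct BXDF *bxdf, const float in[3],
                 const float out[3], float color[3]);
    void (*sample)(const struct BXDF *bxdf, const float in[3],
                   float out[3], float *pdf);
    void *node_data;  /* per-node parameters, owned by the shading node */
} BXDF;

/* Example of a function a node could plug in: a constant eval that
 * ignores the directions and returns the node's albedo. */
static void constant_eval(const struct BXDF *bxdf, const float in[3],
                          const float out[3], float color[3])
{
    const float *albedo = (const float *)bxdf->node_data;
    (void)in; (void)out;
    color[0] = albedo[0];
    color[1] = albedo[1];
    color[2] = albedo[2];
}
```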

Node inputs should be able to know if:

  • The input is a fixed value (can be importance sampled well)
  • The input is a dynamically changing value (unknown for importance sampling)

Rules

  • BXDFs can be converted to colors only using a light node.
  • Colors/values can be converted implicitly to BXDFs.
  • However, some nodes that compute such colors will prohibit conversion to a BXDF, in particular light nodes or an AO node.
  • In some circumstances, certain nodes will be disabled automatically. Specular BXDF nodes will be disabled when evaluating the BXDF for diffuse only, in which case e.g. a layer node may find one of its inputs unavailable.
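The conversion rules above could be expressed as a simple predicate. The node types and the predicate below are hypothetical, purely to illustrate the rule that lighting-dependent colors must not be promoted back to a BXDF.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum NodeType {
    NODE_COLOR,    /* plain color/value */
    NODE_TEXTURE,  /* texture lookup */
    NODE_LIGHT,    /* converts a BXDF to a color using a light */
    NODE_AO        /* ambient occlusion color */
} NodeType;

/* A color output may be implicitly promoted to a BXDF unless it was
 * produced by a node that already accounts for lighting, i.e. a light
 * node or an AO node. */
static bool color_convertible_to_bxdf(NodeType type)
{
    return type != NODE_LIGHT && type != NODE_AO;
}
```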

Node Tree Evaluation

The node tree can be evaluated just like it is now, in forward order. The difference is that there would be two callback functions for nodes: one for importance sampling and another for evaluating the BXDF. Node inputs/outputs that take a BXDF would in one case pass along colors, and in the other a vector and a pdf.

However, lazy evaluation would be better for efficiency. If nodes are disabled, if importance sampling a blending node samples only one of its two inputs, or if nodes are used for smooth transitions between materials, lazy evaluation ensures that only the nodes that are actually relevant get evaluated. It requires the nodes to be evaluated backwards, starting from the output. Evaluating the same node twice can be avoided by tagging it as executed and then simply reusing its output.

This could be hidden by functions that get the input values. For example:

node_get_input_color(node, REFLECTIVITY, reflectivity);
node_get_input_sample(node, REFLECTIVITY, out, &pdf);
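A pull-style lazy evaluation behind such a function could look roughly like this. It is a sketch under assumed data layouts: the Node struct, MAX_INPUTS, and the exec callback are hypothetical, but it shows backward evaluation from the output with an executed tag so each node runs at most once.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_INPUTS 4

typedef struct Node {
    struct Node *inputs[MAX_INPUTS]; /* upstream nodes, NULL if unconnected */
    int executed;                    /* tag: already evaluated this sample? */
    float output[3];                 /* cached result */
    void (*exec)(struct Node *node); /* fills node->output from its inputs */
} Node;

/* Pull evaluation: recurse backwards through the inputs, evaluating each
 * upstream node at most once thanks to the 'executed' tag. */
static void node_evaluate(Node *node)
{
    int i;
    if (node->executed)
        return;
    for (i = 0; i < MAX_INPUTS; i++)
        if (node->inputs[i])
            node_evaluate(node->inputs[i]);
    node->exec(node);
    node->executed = 1;
}

/* node_get_input_color() then reduces to lazily evaluating the upstream
 * node and copying its cached output. */
static void node_get_input_color(Node *node, int input, float color[3])
{
    Node *src = node->inputs[input];
    node_evaluate(src);
    color[0] = src->output[0];
    color[1] = src->output[1];
    color[2] = src->output[2];
}

/* Toy exec callback for testing: outputs red, counting invocations so the
 * run-at-most-once behaviour can be observed. */
static int exec_count = 0;
static void const_red_exec(Node *node)
{
    exec_count++;
    node->output[0] = 1.0f;
    node->output[1] = 0.0f;
    node->output[2] = 0.0f;
}
```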

Example

http://mke3.net/blender/devel/rendering/s/layer_bxdf_nodes.png

Other Ideas

An idea was to use a reflection coordinate system (normal and two tangents) for shading, because it simplifies computations. I'm not really a proponent of this because then we have to think about different coordinate spaces, whereas now it is all in camera space.

Issues

Extracting Passes from BXDF

Render passes should be automatically extracted from a BXDF. Not only are these needed for pass output, but various rendering features depend on them as well:

  • ZTransp needs alpha
  • Transparent/colored shadows need RGBA
  • SSS needs diffuse
  • Photon mapping needs diffuse and specular separate

This can probably be achieved by evaluating the BXDF node tree with each node getting info on which pass it is being evaluated for. That way, for example, a diffuse node can work as normal for the diffuse/combined passes, pass on the color without shading for the RGBA pass, and do nothing in other cases.
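That per-pass behaviour of a diffuse node might be sketched as follows. The pass enum and the callback signature are hypothetical; the point is only the switch on the requested pass.

```c
#include <assert.h>

typedef enum RenderPass {
    PASS_COMBINED,
    PASS_DIFFUSE,
    PASS_RGBA,     /* raw color, e.g. for transparent/colored shadows */
    PASS_SPECULAR
} RenderPass;

/* Hypothetical diffuse node callback: shade normally for the passes it
 * contributes to, pass the unshaded color through for RGBA, and output
 * nothing for passes it does not participate in. */
static void diffuse_node_eval(RenderPass pass, const float albedo[3],
                              float shade, float result[3])
{
    int i;
    switch (pass) {
        case PASS_COMBINED:
        case PASS_DIFFUSE:
            for (i = 0; i < 3; i++)
                result[i] = albedo[i] * shade; /* shaded as normal */
            break;
        case PASS_RGBA:
            for (i = 0; i < 3; i++)
                result[i] = albedo[i];         /* color without shading */
            break;
        default:
            for (i = 0; i < 3; i++)
                result[i] = 0.0f;              /* does not contribute */
            break;
    }
}
```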

Evaluating Passes

We can do pass evaluation in two ways: either evaluate all passes in one go, where a node fills in every pass when it is called, or evaluate the node tree multiple times, with a node filling in just one pass each time.

The former method is clearly more efficient when using many passes, while the latter is more efficient when evaluating only a combined pass, and is also simpler in implementation.

SSS and Volume Multiple Scattering

How these features fit into the system is not entirely clear. In a full path tracing system they can be formulated as a BXDF and raytraced, but we should also still support them as a preprocessing pass for faster rendering later. One way to do this is outside the BXDF. However, it would be convenient if making a skin shader with multiple layers of SSS could still be done with a BXDF. It is not clear to me how to make that work, though: the lights will already be baked into the SSS point cloud, which effectively overrides the light interaction, so what happens if you then mix it with e.g. a specular shader?