Materials are still using the old code, but this is now cleanly separated to support plugging in a node based materials system. Using the material for a given shading point requires a call to set things up first:
mat_shading_begin: set up material for shading a given point, evaluating textures, bump mapping, and other things that do not depend on the light vector.
mat_shading_end: clean up
After that it is possible to evaluate and sample the bsdf, and evaluate the emission:
mat_bsdf_f: evaluate the bsdf with a given light vector
mat_bsdf_sample: sample a light vector and evaluate the bsdf. (Not implemented at the time of writing).
mat_emit: get the amount of light emitted in the view direction
Beyond that it is also possible to query some averaged values, which are useful for approximate algorithms that do not use the full bsdf:
mat_color: get average diffuse surface color, used by e.g. the color pass or transparent shadows.
mat_alpha: get average transparency, used by e.g. ztransp which ignores refraction effects
The material can also be used to evaluate displacement:
mat_displacement: get displacement vector at current point
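The flow of the material API above can be sketched in code. This is a minimal scalar stand-in, not the real renderer code: all structs and the Lambert placeholder bsdf are assumptions for illustration, and only the call sequence (begin, per-light bsdf evaluation, end) reflects the design described.

```c
#include <math.h>

/* Hypothetical sketch of the material API described above; the actual
 * signatures in the renderer differ. All structs here are stand-ins. */

typedef struct { float r, g, b; } Color;
typedef struct { float x, y, z; } Vec3;

typedef struct {
    Color diffuse;   /* result of texture/bump evaluation */
    float alpha;
} ShadePoint;

/* mat_shading_begin: evaluate everything that does not depend on the
 * light vector (textures, bump mapping, ...). */
static void mat_shading_begin(ShadePoint *shd)
{
    shd->diffuse = (Color){0.8f, 0.8f, 0.8f}; /* placeholder texture lookup */
    shd->alpha = 1.0f;
}

/* mat_bsdf_f: evaluate the bsdf for a given light vector. A plain
 * Lambert term (albedo/pi, times cos_x) stands in for the real material. */
static Color mat_bsdf_f(const ShadePoint *shd, Vec3 n, Vec3 l)
{
    float cos_x = n.x*l.x + n.y*l.y + n.z*l.z;
    if (cos_x < 0.0f) cos_x = 0.0f;
    float f = cos_x / 3.14159265f;
    return (Color){shd->diffuse.r*f, shd->diffuse.g*f, shd->diffuse.b*f};
}

/* mat_shading_end: clean up per-point state. */
static void mat_shading_end(ShadePoint *shd) { (void)shd; }

/* Typical usage: begin once per point, evaluate per light, end. */
static float demo_shade(void)
{
    ShadePoint shd;
    mat_shading_begin(&shd);
    Color c = mat_bsdf_f(&shd, (Vec3){0,0,1}, (Vec3){0,0,1});
    mat_shading_end(&shd);
    return c.r;
}
```

Note how the texture lookup happens once in mat_shading_begin, while mat_bsdf_f may be called many times with different light vectors.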
Fitting in a New Shading System
A new node based shading system should fit well into this design, basically replacing the material functions (although in practice more changes are needed).
In mat_shading_begin the node tree could be executed halfway, up to any nodes that require light vectors. This would typically include all textures, so they would need to be executed only once. After that, functions such as mat_bsdf_sample could evaluate the rest of the nodes each time they are called. Nodes would probably also need custom code to implement these functions.
Before using a lamp to shade a surface, the following function is used to check if there is any influence at all (for example because of layers or zero power):
Next, positions on the lamp can be sampled. If random numbers are provided, the corresponding location on the lamp will be returned; otherwise the center will be used.
lamp_sample: returns lamp vector, influence and shadow. All three can be set to NULL to not compute them.
There are two other functions to query lamp shadow and influence in some places, but these should eventually be replaced by lamp_sample.
There are three ways in which lamps are used to compute shading, with varying accuracy. Full shading is done using the following function (using the new Multi Shade option for lamps):
shade_lamp_multi: sample locations on the lamp and for each use the material bsdf and do a shadow lookup.
However, this requires many bsdf executions, which may be slow and/or noisy; an alternative is to use:
shade_lamp_multi_sample: sample locations on the lamp for shadow lookups, but use the lamp center for shading. For area lamps the form factor is used to get more accurate influence.
In case only a single shadow lookup is wanted (for example, shadow buffers already give anti-aliased shadows with one query, while ray shadows require more than one sample), the following function is used:
shade_lamp_single: use lamp center for a single shadow lookup. For area lamps the form factor is used to get more accurate influence.
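The full shading case above can be sketched as a simple loop. This is a minimal scalar sketch of the shade_lamp_multi idea, not the real implementation: the callback type, the demo lamp, and the stratified sample pattern are all assumptions, and the real renderer works on colors rather than single floats.

```c
#include <math.h>

/* Minimal scalar sketch of shade_lamp_multi: sample several locations
 * on the lamp and, for each one, combine influence, shadow and bsdf.
 * The callback stands in for lamp_sample. */

typedef void (*LampSampleFn)(float rand_u, float rand_v,
                             float *influence, float *shadow);

static float shade_lamp_multi(LampSampleFn lamp_sample, int num_samples,
                              float bsdf)
{
    float accum = 0.0f;
    for (int i = 0; i < num_samples; i++) {
        /* simple stratified 1D pattern standing in for real sampling */
        float u = (i + 0.5f) / (float)num_samples;
        float influence, shadow;
        lamp_sample(u, u, &influence, &shadow);
        accum += influence * shadow * bsdf;
    }
    return accum / (float)num_samples;
}

/* A constant demo lamp: influence 2.0 everywhere, half in shadow. */
static void demo_lamp_sample(float u, float v,
                             float *influence, float *shadow)
{
    (void)u; (void)v;
    *influence = 2.0f;
    *shadow = 0.5f;
}
```

The two cheaper variants differ only in what happens inside the loop: shadow is still sampled per location, while influence and bsdf are evaluated once at the lamp center.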
Previously it was not possible to do physically correct inverse square lamp falloff; instead there was a distance parameter to control the falloff. The problem is that correct lamp falloff gives very harsh light differences, which I think would be solved with proper tone mapping. However, that's not there yet, and the distance parameter has been replaced with a "smooth" value: increasing it makes the falloff softer, while setting it to 0.0 gives the physically correct result, which should work well combined with tone mapping.
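A falloff with this behavior can be written as below. The exact formula used in the renderer is an assumption here; what the sketch shows is only the property described above: smooth == 0.0 reduces to plain inverse square falloff, and larger values soften the result near the lamp.

```c
/* Hypothetical "smooth" falloff: with smooth == 0.0 this is physically
 * correct inverse square falloff; larger values soften the falloff
 * near the lamp, where 1/dist^2 would otherwise blow up. */
static float lamp_falloff(float dist, float smooth)
{
    return 1.0f / (dist * dist + smooth);
}
```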
The area lamp code has been changed to be physically correct and easier to control, without gamma or distance parameters. When no shadowing is done, it uses a fast form factor computation, which exactly matches full sampling with a Lambert material, inverse square falloff and smooth 0.0. This method does not fit particularly well, as it breaks the decoupling of materials/lights, but has the advantage that it is really quick and noise free.
Number of Shading Points per Pixel
Blender uses a trick to avoid doing shading for each anti-aliasing sample in a pixel. For samples that hit the same triangle, the averaged location within the triangle is used for shading. The Full Osa option can be used to disable this behavior.
This saves a lot of time, but it can lead to problems, as the location is not accurate and may not actually be on the triangle. In particular, for raytracing this leads to self intersections; to avoid this, the ray starting location is varied between the anti-aliasing sample locations, instead of using the averaged location.
This also complicates things a bit with regards to the number of samples used: the code tries to keep the number of samples per pixel independent of the Full Osa option, so in this case it uses fewer samples for each shade.
The strand renderer is even more radical: it only shades strand curve control points, and interpolates the shading along the strand.
Decoupling visibility tests from shading is an important optimization for render engines that do "fat" shading operations, which are expected to be already noise-free (as compared to typical raytracers that do many lighter operations which are averaged together).
It's an approximation, and it can give subpixel shadow/light leaking, or aliasing of specular highlights, textures, etc. Further, a current weak point is rendering very high poly meshes, which revert to doing many shading operations again. A possible solution would be to merge samples with other triangles anyway if they are similar enough, or instead do something more micropolygon-like.
Integration over Lamps
The material BSDF is designed to return values in the range 0..1/pi, and includes the cosine term usually separated from it.
For integrating over the surface of lights in the scene, this is the standard formula:
integrate_over_areas(L * f * V * cos_x * cos_z / dist_squared)

L = light coming from lamp
f = BRDF/BTDF (also in range 0..1/pi for energy conservation)
cos_x = dot product between surface normal and light vector
cos_z = dot product between light normal and vector to surface
V = visibility
dist_squared = squared distance between surface and lamp points
In Blender this then corresponds to:
lamp_influence = L * cos_z / dist_squared
lamp_shadow = V
mat_bsdf = f * cos_x
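This correspondence can be written out directly: the contribution of one lamp sample is the product of the three Blender-side quantities, and it equals the integrand of the area formula. The function below is an illustrative sketch, not renderer code.

```c
/* The contribution of one lamp sample, as the product of the three
 * Blender-side quantities. By construction this equals the integrand
 * L * f * V * cos_x * cos_z / dist_squared of the area formula. */
static float lamp_integrand(float L, float f, float V,
                            float cos_x, float cos_z, float dist_squared)
{
    float lamp_influence = L * cos_z / dist_squared;
    float lamp_shadow = V;
    float mat_bsdf = f * cos_x;
    return lamp_influence * lamp_shadow * mat_bsdf;
}
```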
For integrating over the hemisphere, the standard formula is:
integrate_over_hemi(L * f * cos_x)

L = light coming from some direction
Currently in the code the bsdf is not yet used for such cases; rather a Lambert or Phong shader is assumed.
Using the BSDF
Diffuse raytraced reflections now have the option to use the full BRDF or assume Lambert shading. When using irradiance caching, however, using the full BRDF is no longer possible; rather, the BRDF is executed once for each color channel, with the average incoming light direction (using the DreamWorks trick from "An approximate global illumination system for computer generated films").
Point based diffuse reflections still assume Lambert shading; using full BRDFs here would require a different, rasterization based implementation as in Renderman.
The raytraced reflection/refraction and environment map code has not been updated yet to the new system. Ideally this should use mat_bsdf_sample to sample outgoing specular vectors.
More Unified Solid/Ztransp
While rasterization of solid and ztransp samples into buffers still happens separately, they are now combined in a single pixel row and shaded together. This may save a few shading operations and avoid a few artifacts, but mostly it simplifies the code. Previously, writing to the render result was also done separately and inconsistently, using different filters. Unifying the rasterization may also be a useful thing to do in the future.