We can make a distinction between three types of motion blur (camera and object motion blur are already implemented and working):
- Camera: this is the easiest to implement; it means interpolating the camera transform and parameters. Mainly we're missing the utility functions to decompose, interpolate and compose transformation matrices.
- Object: object-level transforms would be the next step, again interpolating matrices. Extra work here lies in ensuring the BVH bounds are correct, and in keeping register pressure low during BVH traversal. Fast-moving objects may present a performance problem.
- Vertex: moving mesh vertices is the most complicated case, and the one needed for animated characters. The simplest solution would be to add a moving triangle primitive. For fast-moving triangles there may again be a performance problem; oriented bounding boxes and/or interpolating BVH node bounding boxes may give big speedups in such cases.
On a related note, we should be able to support animated node input values as well. A generic "animated value" node linked to such inputs could interpolate the value over time. Something similar could be done for mesh attributes, though that would be more complicated.
(currently in development for Blender 2.67)
For subsurface scattering, there is already a BSSRDF closure. The first implementation should be simple raytraced multiple scattering following the original BSSRDF paper. That means that in addition to a direction sample, we also need a position sample, i.e. two extra dimensions for the random number generator.
Brute-force volume multiple scattering would also give SSS, but native BSSRDF support seems much more efficient. Point-based SSS may be a step too far; it goes in the direction of shadow maps and similar algorithms, which are not ideal for fast previews. However, raytraced BSSRDF also has some quality issues compared to point-based methods, for which we don't have a solution yet.