From BlenderWiki

Note: This is an archived version of the Blender Developer Wiki. The current and active wiki is available on wiki.blender.org.

Week 9

What I did this week

  • Committed some motion blur code to my branch (r59183). We can now enable and disable motion blur on a per-object basis in the Properties Editor. This commit also contains some basic code for the motion multiplier, but I still need to properly hook that up with the object transform matrix and the shutter time.
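To make the open point concrete, here is a rough sketch of how a per-object motion multiplier could scale the sampled shutter time when interpolating the object transform. All names here are hypothetical and only a translation is blended; the actual branch code works on full transform matrices.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of a per-object motion multiplier.  Real code
// would interpolate full decomposed transforms (location/rotation/scale);
// here only a translation is blended to show the idea.
struct Float3 {
    float x, y, z;
};

Float3 motion_blur_position(const Float3 &prev, const Float3 &curr,
                            float shutter_time, float multiplier)
{
    // Sample time within the shutter interval, scaled by the
    // per-object multiplier and clamped to [0, 1].
    float t = shutter_time * multiplier;
    t = std::fmin(std::fmax(t, 0.0f), 1.0f);
    return {prev.x + (curr.x - prev.x) * t,
            prev.y + (curr.y - prev.y) * t,
            prev.z + (curr.z - prev.z) * t};
}
```

A multiplier of 2.0 would make the object appear to cover twice the distance within the same shutter time, which is the effect the UI option is meant to expose.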

Questions and observations

The current model is calculated within Cycles itself.

SVM: pre-calculation happens in nodes.cpp, the final evaluation in the SVM kernel file.

OSL: has everything in the .osl file, but since sun_direction and turbidity are constant, it can calculate those parts once and optimize them out for the actual rendering. (OSL makes heavy use of constant folding etc.)
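To illustrate that split with a self-contained sketch (the names are made up, this is not the actual Cycles code): everything that depends only on sun_direction and turbidity is computed once on the host, so the per-sample code only needs a cheap dot product per view direction, mirroring what OSL achieves automatically via constant folding.

```cpp
#include <cassert>
#include <cmath>

// Host-side precalculation: depends only on the constant node inputs,
// so it runs once per sync, not once per sample.
struct SkyPrecalc {
    float sun_dir[3];  // unit vector toward the sun
    float turbidity;
};

SkyPrecalc sky_precalc(float sun_elevation, float sun_azimuth, float turbidity)
{
    SkyPrecalc p;
    p.sun_dir[0] = std::cos(sun_elevation) * std::cos(sun_azimuth);
    p.sun_dir[1] = std::cos(sun_elevation) * std::sin(sun_azimuth);
    p.sun_dir[2] = std::sin(sun_elevation);
    p.turbidity = turbidity;
    return p;
}

// Kernel-side evaluation: only cheap per-direction work remains,
// here the cosine of the angle between view direction and sun.
float sun_angle_cos(const SkyPrecalc &p, const float dir[3])
{
    return p.sun_dir[0] * dir[0] + p.sun_dir[1] * dir[1] +
           p.sun_dir[2] * dir[2];
}
```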

The new model, on the other hand, comes with a 66 KB dataset of coefficient values (ArHosekSkyModelData_RGB.h; see the sample code archive). Other engines all use this data, so I guess we need it too. But I think we should use it to precalculate the sky again, rather than increase the kernel size. What confuses me is the "ArHosekSkyModelState" which we need to initialize. (As we use RGB, we would start with "arhosek_rgb_skymodelstate_alloc_init".)
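For context, the per-channel evaluation in the new model is a nine-coefficient function of the view zenith angle theta and the angle gamma between the view direction and the sun; the dataset, via the ArHosekSkyModelState, supplies the coefficients A through I. Below is a sketch of that formula as I understand it from the paper; the coefficient values passed in are dummies, not real dataset values.

```cpp
#include <cassert>
#include <cmath>

// Hosek/Wilkie sky radiance distribution, as given in the 2012 paper.
// coef[0..8] = A..I; in the real model these come from the fitted
// dataset via the ArHosekSkyModelState.
double hosek_radiance(double cos_theta, double gamma, const double coef[9])
{
    double A = coef[0], B = coef[1], C = coef[2], D = coef[3], E = coef[4];
    double F = coef[5], G = coef[6], H = coef[7], I = coef[8];
    double cos_gamma = std::cos(gamma);
    // Anisotropic scattering lobe around the sun.
    double chi = (1.0 + cos_gamma * cos_gamma) /
                 std::pow(1.0 + H * H - 2.0 * H * cos_gamma, 1.5);
    return (1.0 + A * std::exp(B / (cos_theta + 0.01))) *
           (C + D * std::exp(E * gamma) + F * cos_gamma * cos_gamma +
            G * chi + I * std::sqrt(cos_theta));
}
```

Since only theta and gamma vary per sample, all nine coefficients could be baked on the host from the dataset, which supports the precalc-plus-cheap-evaluation split.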

I checked some implementations of the new model:

  • A simple implementation that calculates every pixel and outputs an image.
  • Mitsuba, which renders the sky to an HDR map internally and samples that.

So basically I would appreciate some starting points on how to approach this. Should we use the model's dataset? Separate the code into precalc and SVM evaluation again?
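As one possible starting point, a Mitsuba-style split could look roughly like this: bake the model into a small latitude/longitude table once at sync time, and let the per-sample code do only a table lookup. Names and resolution are made up; sky_eval() is a placeholder for the real model, and the lookup is nearest-neighbour for brevity.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const float PI_F = 3.14159265f;

// Sky baked into a lat-long table: one float per texel for brevity
// (a real bake would store RGB radiance).
struct SkyTable {
    int width, height;
    std::vector<float> data;
};

// Placeholder for the real model evaluation: brightest at the zenith.
static float sky_eval(float theta, float phi)
{
    (void)phi;
    float c = std::cos(theta);
    return c > 0.0f ? c : 0.0f;
}

// Host side, runs once per sync: evaluate the model at each texel center.
SkyTable bake_sky(int width, int height)
{
    SkyTable t{width, height, std::vector<float>(width * height)};
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float theta = (y + 0.5f) / height * PI_F;        // zenith angle
            float phi = (x + 0.5f) / width * 2.0f * PI_F;    // azimuth
            t.data[y * width + x] = sky_eval(theta, phi);
        }
    }
    return t;
}

// Kernel side, runs per sample: nearest-neighbour table lookup.
float lookup_sky(const SkyTable &t, float theta, float phi)
{
    int y = (int)(theta / PI_F * t.height);
    int x = (int)(phi / (2.0f * PI_F) * t.width);
    if (y >= t.height) y = t.height - 1;
    if (x >= t.width) x = t.width - 1;
    return t.data[y * t.width + x];
}
```

The trade-off is memory and bake time versus kernel size: the dataset and the state initialization stay entirely on the host, and the kernel only needs the texture.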

Next week

  • Continue with the Sky feature. I would also like to spend some time documenting parts of the Cycles code, which is not trivial to understand (at least for me). :)

Questions

See above, regarding the new Sky model.