
Dome Offline Rendering Thoughts


ZERO: There are a lot of workarounds out there to make fulldome (a.k.a. fisheye) renders in Blender. Each has its own advantages and disadvantages. This page, though, holds ideas for a real solution. In other words: what would need to be changed in the Blender code in order to achieve this.


FIRST: Blender doesn't have a real ray tracing rendering system. In other words: you can't cast rays in arbitrary directions as in YafaRay, POV-Ray, ... The solution is to use a stitching system.
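To make the limitation concrete, here is a minimal Python sketch (my own illustration, not Blender code; the function name is mine) of the angular fisheye mapping. For a 180° image the border pixels map to rays perpendicular to the view axis, so no single planar camera frustum can cover them, and several renders have to be stitched:

    import math

    def fisheye_ray(u, v, fov_deg=180.0):
        """Map a pixel of an angular fisheye image (u, v in [-1, 1],
        measured from the image center) to a unit ray direction.
        Returns None outside the fisheye circle."""
        r = math.hypot(u, v)
        if r > 1.0:
            return None  # outside the circular fisheye image
        theta = r * math.radians(fov_deg) / 2.0  # angle from the view axis
        phi = math.atan2(v, u)                   # angle around the view axis
        # z is the forward axis; for fov_deg > 180 some rays point backwards
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))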


SECOND: This is NOT a piece of cake. It requires a lot of code to make it work properly. Previous experience with Blender coding may not be mandatory, but it would certainly help to optimize the time invested in it.


THIRD: These ideas came out of some personal research (I coded the BGE dome mode and started looking at offline solutions), plus a talk with Ton Roosendaal and some advice from Brecht after the Blender Conference 2009. The general idea here is to map out the amount of work required for this task, in order to help people willing to gather funding for this feature (or to code it themselves).


THE IDEAS PER SE: (roughly explained; if anyone wants, I can spend more time explaining any unclear topic)

Initial note: The render pipeline is likely to be changed a little by Brecht in the upcoming months [early 2010]. Nothing drastic, but he'll probably move functions around to make the code more organized.

I don't know how much that would interfere with this work. According to Brecht, not much, but it is hard to tell right now [Blender 2.5alpha0].

We thought about two solutions:

a) to render two cylindrical panoramas and process them individually. They would contain all the scene's pixel information. Cylindrical panoramic images can be treated as regular images for effects such as blur, glare, ... (a sketch of the cylindrical mapping follows after option b).

b) to render a cube map and then have a lookup method to use the pixels in spherical space (instead of image space). E.g. if you want a circular radial blur, you will get an elliptical radial blur in cubemap space, more deformed near the square corners. I would assume we could even have cubemap images generated with a FOV (field of view) bigger than 90°, and if the circular origin sits on the seam between the cube faces we do need a lookup method (the second sketch below illustrates one).
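For option a), here is a rough Python sketch (my own, with an assumed vertical-coverage parameter) of the cylindrical mapping. The horizontal pixel position is linear in the azimuth angle, which is what lets 2D filters treat the panorama as a regular image:

    import math

    def direction_to_cylindrical(d, v_fov_deg=90.0):
        """Project a unit direction vector onto cylindrical panorama
        coordinates u, v in [0, 1]. u is linear in azimuth, so a
        horizontal blur acts uniformly across the scene.
        v_fov_deg is an assumed vertical coverage, not a Blender setting."""
        x, y, z = d
        azimuth = math.atan2(x, z)                      # angle around the vertical axis
        u = azimuth / (2.0 * math.pi) + 0.5             # wrap to [0, 1)
        half = math.tan(math.radians(v_fov_deg) / 2.0)  # cylinder half-height
        v = 0.5 + (y / math.hypot(x, z)) / (2.0 * half) # height on the cylinder
        return u, v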
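For option b), a sketch of such a lookup method, assuming the standard dominant-axis cube map convention (again illustrative Python, not Blender code): given a direction in spherical space, find the face it hits and the coordinates inside that face:

    def cubemap_lookup(d):
        """Map a unit direction vector to (face, u, v), u and v in [0, 1].
        Uses the usual OpenGL-style face/sign convention; Blender could
        of course pick a different one."""
        x, y, z = d
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:    # x-major: hits the +x or -x face
            face, sc, tc, ma = ('+x' if x > 0 else '-x'), (-z if x > 0 else z), -y, ax
        elif ay >= az:               # y-major: hits the +y or -y face
            face, sc, tc, ma = ('+y' if y > 0 else '-y'), x, (z if y > 0 else -z), ay
        else:                        # z-major: hits the +z or -z face
            face, sc, tc, ma = ('+z' if z > 0 else '-z'), (x if z > 0 else -x), -y, az
        return face, (sc / ma + 1.0) / 2.0, (tc / ma + 1.0) / 2.0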

  • At the end of the workflow, for both methods, we would need warping nodes to convert the cube map into the different dome formats (small detail: including spherical mirror); see the warp sketch after the quote below.
  • I would much rather see b) than a). Indeed, Paul Bourke thinks that cube map should be the standard format for fulldomes instead of fisheye. Fisheye means a lot of precision loss, right? Fisheye means you have a problem if your dome is not exactly 180°; cubemap is more "neutral".

"the pixel to screen ratio in fisheye is less constant accross the screen than with cubemap, no? hmm I wouldn't know that out of the top of my head but P.Bourke strongly advices for the cubemap"

  • Brecht told me that the current convolution (blur, glare, ...) pixel lookup code is not unified; every function tends to implement its own lookup code. Since we would need our own lookup code, we (whoever codes it, or Brecht) may need to organize that first. See the sketch after the quote below.


"that would help with overflow too how to deal with borders and all extend, mirror, constant color, ..."


-- For any questions, suggestions, ideas, ... feel free to contact me and I will expand this page.


Dalai Felinto - November 2009 dfelinto@yahoo.com