
Blender Stereoscopic workflow and code proposal

BMW model courtesy of mikepan.com

Motivation

This is a technical document explaining the features planned to enable stereoscopic render support in Blender. This proposal was developed in collaboration with Francesco Siddi.

We look forward to hearing back from artists and developers. Feedback can be sent via #blendercoders, the bf-committers mailing list, or email.
Dalai Felinto (dfelinto on IRC)

Related Links

  • 3D Movie Making by Bernard Mendiburu (book) [1]
  • Cinema Stereoscopico by Francesco Siddi (book - Italian) [2]
  • Blender 2.6 Stereoscopic Rendering Addon by Sebastian Schneider [3]
  • Light Illusion (floating window and other related topics) [4]

Introduction

Stereo 3D has been explored for years by the movie industry. The well-known principle is to produce two renders per frame (one for each eye). To produce those images we need two new rendering parameters: the interocular separation and the convergence plane distance.

The images can be produced in three different ways: (1) off-axis; (2) toe-in; and (3) parallel cameras. Off-axis is the ultimate goal of the digital movie-maker, but the other two methods may be needed when combining rendered elements with captured footage.

However, the corollary for 3D movie making is that the result has to be more than rendering two images. Depth narratives are a completely new subject and have their own needs in terms of tools and workflow. Thus, once the images are rendered, it's essential to let the artist check the result on their 3D-enabled display. That means enabling stereo preview in the UV/Image Editor.

The implementation of stereo features can happen on two separate (and almost independent) fronts: Viewport and Rendering. The Viewport takes care of the workflow while editing the 3D scene, allowing for the preview of the depth bracket and previsualization of the 3D stereo effect. Rendering includes saving the files, integrating them into composite nodes, and previewing the rendered result.

Viewport

The 3D viewport is important for previewing layouts in camera view; this only makes sense when the view is in camera mode. In the 3D viewport a stereo camera should be properly visualized (the pair of virtual cameras, the stereo window and its limits: close plane, zero plane and distant plane).

We should be able to see both eyes, or only the left or right eye.

When seeing both eyes, different settings for stereo visualization in the 3D viewport could be available (red-cyan and other anaglyphs, interlaced, etc.; see Display Modes).

Interface Preview

model courtesy of patazstudio.com

Implementation Considerations

All the Display Modes require the viewport to be rendered twice. The simplest modes (anaglyph, interlace, v-interlace) can be accomplished by setting different OpenGL flags on each render pass (as a reference, see the game engine file RAS_OpenGLRasterizer.cpp).
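
As a rough illustration, the red-cyan anaglyph case could look like the sketch below. This assumes a legacy OpenGL context like the one in RAS_OpenGLRasterizer.cpp; draw_viewport() is a hypothetical stand-in for the existing viewport draw code, parameterized by the V3D_S3D_EYE_* values proposed further down.

#include <GL/gl.h>
 
void draw_viewport(int eye); /* hypothetical: existing draw code, per eye */
 
void draw_anaglyph(void)
{
    /* Left eye: write only the red channel. */
    glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_TRUE);
    draw_viewport(1 /* V3D_S3D_EYE_LEFT */);
 
    /* Right eye: write only green and blue (cyan). Clearing depth here
     * leaves the z-buffer of the last rendered eye, as discussed below. */
    glClear(GL_DEPTH_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_TRUE, GL_TRUE, GL_TRUE);
    draw_viewport(2 /* V3D_S3D_EYE_RIGHT */);
 
    /* Restore the mask for the UI drawing that follows. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}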

For other modes (anaglyph desaturated, anaglyph partly saturated) we may need to render the viewport to FBOs and apply a GLSL screen shader or equivalent (or we can simply not support those modes at first).
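
If we do go the FBO route, the screen pass itself can be tiny. The sketch below shows a desaturated red-cyan anaglyph as legacy GLSL stored in a C string; the uniform names and the luminance weights are illustrative, not an existing shader.

static const char *anaglyph_desat_frag =
    "uniform sampler2D left_tex;\n"
    "uniform sampler2D right_tex;\n"
    "void main(void)\n"
    "{\n"
    "    vec3 l = texture2D(left_tex, gl_TexCoord[0].st).rgb;\n"
    "    vec3 r = texture2D(right_tex, gl_TexCoord[0].st).rgb;\n"
    "    /* desaturate each eye to luminance before tinting */\n"
    "    float ll = dot(l, vec3(0.299, 0.587, 0.114));\n"
    "    float lr = dot(r, vec3(0.299, 0.587, 0.114));\n"
    "    gl_FragColor = vec4(ll, lr, lr, 1.0);\n"
    "}\n";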

For side-by-side and quad buffer the most elegant solution would be to render the entire viewport twice, each pass with a different value set for the current eye. The Triple Buffer window draw method may already be doing that (to be investigated). To me this is more a matter of having Blender support side-by-side/quad-buffer and then enabling per-eye viewport rendering, rather than the other way around.
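
For quad buffer specifically, the two passes map directly onto the stereo back buffers, assuming the window was created with a stereo-capable pixel format (draw_viewport() as in the anaglyph sketch above):

#include <GL/gl.h>
 
void draw_viewport(int eye); /* hypothetical: existing draw code, per eye */
 
void draw_quadbuffer(void)
{
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_viewport(1 /* V3D_S3D_EYE_LEFT */);
 
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_viewport(2 /* V3D_S3D_EYE_RIGHT */);
}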

It's important to test how eventual changes in the z-buffer affect viewport selection. For the simpler modes we can leave the z-buffer of the last rendered eye; for the screen shader options we may need to draw the screen plane without writing to the z-buffer.

DNA

/* Stereo Settings */
typedef struct View3DStereoSettings {
    short flag;
    short eye;
    short mode;
    char pad[2];
    /* to exaggerate the depth - default 1.0 */
    /* still have to decide if we need that  */
    //float factor;
} View3DStereoSettings;
 
/* View3D->stereo.flag (short) */
#define V3D_S3D_DISPCAMERAS (1 << 0)
#define V3D_S3D_DISPPLANES (1 << 1)
#define V3D_S3D_DISPVOLUME (1 << 2)
#define V3D_S3D_DISPFLOATWIN (1 << 3)
#define V3D_S3D_LEFT (1 << 4) /* (internal) */
 
/* View3D->stereo.eye (short) */
#define V3D_S3D_EYE_BOTH 0
#define V3D_S3D_EYE_LEFT 1
#define V3D_S3D_EYE_RIGHT 2
 
/* View3D->stereo.mode (short) */
#define V3D_S3D_ANAGLYPH 0
#define V3D_S3D_INTERLACE 1
#define V3D_S3D_VINTERLACE 2
#define V3D_S3D_QUADBUFFER 3

Camera

The Camera datablock will hold the per-camera stereo settings: the interocular separation, the convergence plane distance, and the convergence mode (toe-in, off-axis, parallel).

Later we should be able to expand this functionality to let users work with pixel separation instead of directly with interocular/plane distances (for reference see this addon). Internally, however, we are always changing the interocular separation and the convergence plane distance (which also means those extra functionalities can be achieved via RNA).
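
For reference, the off-axis mode reduces to two values per eye. The function below is a hypothetical sketch (not existing Blender code) of how the camera translation and the asymmetric frustum shift follow from the two stored parameters; "nearclip" stands in for the camera's near-clip distance.

void stereo_offaxis_params(float interocular_distance,
                           float convergence_distance,
                           float nearclip,
                           int is_left_eye,
                           float *r_camera_offset_x,
                           float *r_frustum_shift_x)
{
    float side = is_left_eye ? -1.0f : 1.0f;
    float half_io = 0.5f * interocular_distance;
 
    /* Each eye is translated along the camera's local X axis. */
    *r_camera_offset_x = side * half_io;
 
    /* Shear the frustum so both eyes converge at the zero plane:
     * by similar triangles, shift = half_io * nearclip / convergence. */
    *r_frustum_shift_x = -side * half_io * nearclip / convergence_distance;
}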

Interface Preview

Dev-Stereoscopy-Interface-Camera.png

The stereo options should have their own panel, to be grayed out when stereo is disabled at the scene level (or perhaps we expose scene.use_stereo here?).

Floating windows allow for cutting off the non-intersecting area of the camera frustums.

Later this can be extended to accommodate workflow functionalities such as auto-setting the interocular and convergence parameters based on a depth bracket. This doesn't need to be built in and can be implemented as an addon, so studios can change it more easily.

DNA

/* Stereo Settings */
typedef struct CameraStereoSettings {
    float interocular_distance;
    float convergence_distance;
    short convergence_mode;
    short floating_window_mode;
    float floating_window_distance;
} CameraStereoSettings;
 
/* stereo->convergence_mode */
#define CAM_S3D_OFFAXIS  0
#define CAM_S3D_PARALLEL 1
#define CAM_S3D_TOE      2
 
/* stereo->floating_window_mode */
#define CAM_S3D_FW_NONE  0
#define CAM_S3D_FW_LEFT  1
#define CAM_S3D_FW_RIGHT 2

Scene

The scene panel is where stereo gets enabled/disabled for a scene. Apart from this flag we need an enum to pick between Left, Right or Both eye renders. When stereo is enabled, the Output panel will show options to pick the left and right folder names/outputs.

This controls all the other stereo options (except the viewport), so the camera stereo options and others are grayed out unless this panel is enabled.

Interface Preview

Dev-Stereoscopy-Interface-Scene.png

DNA

/* path to render output */
    char pic[1024]; /* 1024 = FILE_MAX */
+   char pic2[1024]; /* 1024 = FILE_MAX */
 
/* Stereo Settings */
typedef struct SceneStereoSettings {
    short eye;
    short pad[3];
} SceneStereoSettings;
 
/* scene->stereo.eye */
#define SCE_S3D_BOTH  0
#define SCE_S3D_LEFT  1
#define SCE_S3D_RIGHT 2
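
A small sketch of how the pipeline could pick between the two paths; render_output_path() is hypothetical, while the pic/pic2 fields are the ones proposed above.

const char *render_output_path(const char *pic, const char *pic2,
                               int saving_right_eye)
{
    /* Right-eye files go to the second output; the left eye (and plain
     * mono renders) keep using the existing path. */
    return saving_right_eye ? pic2 : pic;
}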

Render

When stereo render is on (see Scene) Blender may render two buffers. Internally they may be stored as one big buffer, with one image following the other in image space, or as two separate buffers. We need to investigate how the "Panorama" render in the Blender Render Engine works. Either way, we need a new flag to tell the image buffer that this is a Stereo Buffer. This is particularly important for the Compositor and UV/Image Editor components.
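
To make the following sections concrete, here is a hypothetical sketch of the "one big buffer" variant; none of these names are actual Blender ImBuf fields, and the flag value is illustrative.

#include <stddef.h>
 
#define IB_STEREO (1 << 15) /* illustrative flag value */
 
typedef struct StereoBuf {
    int flag;    /* IB_STEREO set when both eyes are present */
    int x, y;    /* dimensions of a single eye */
    float *rect; /* RGBA float pixels: left eye, then right eye */
} StereoBuf;
 
/* Returns the start of the requested eye; mono buffers only have "left". */
float *stereo_buf_eye(StereoBuf *buf, int right_eye)
{
    if (right_eye && (buf->flag & IB_STEREO)) {
        return buf->rect + (size_t)buf->x * buf->y * 4; /* skip left eye */
    }
    return buf->rect;
}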

UV/Image Viewer

The Image Viewer needs to be expanded to account for the Stereo Buffer. We need to show the result using a selection of Display Modes. Side-by-side mode should follow the Viewport implementation, with the screen rendered side-by-side and each image viewer showing the render buffer corresponding to its current side/eye.

Whenever the current buffer is a Stereo Buffer, spacebar will toggle between mono and stereo display. When in mono mode the artist can switch between left and right with the L/R keys.

Optionally (to be decided) it should also have the ability to save the current view as displayed (with the filter applied, for example the anaglyph).

Composite Nodes

As mentioned in Render, we will now have a Stereo Buffer. This will be compatible with (almost?) all the nodes that the color socket (the yellow socket) can be used with. The difference is that the Stereo Buffer will have the node computation running for each of its two internal buffers.

For performance reasons, most of the time when the artist is tweaking, only one buffer needs to be calculated. A global (per Node Editor) option can help limit the composite to one eye only (the left). This, however, shouldn't affect the actual render, which should always composite both eyes (unless set otherwise in the render panel).

An option is to re-use the settings as set in the Scene panel. To be decided.
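
In terms of the hypothetical StereoBuf from the Render section, per-eye execution with such a preview limit could look like this sketch (exec_pixels() stands in for an existing single-buffer node operation):

void exec_pixels(float *rect, int x, int y); /* hypothetical mono node exec */
 
void node_exec_stereo(StereoBuf *buf, int preview_left_only)
{
    exec_pixels(stereo_buf_eye(buf, 0), buf->x, buf->y); /* left eye */
 
    /* At final render time the caller passes preview_left_only = 0,
     * so both eyes are always composited. */
    if ((buf->flag & IB_STEREO) && !preview_left_only) {
        exec_pixels(stereo_buf_eye(buf, 1), buf->x, buf->y); /* right eye */
    }
}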

Viewer Node

When a Stereo Buffer is connected to a Viewer Node we want to show one of the eyes (likely the left) in the backdrop. But if the Viewer Node is currently visible in a UV/Image Viewer we need to composite both eyes.

Output Node

We either need to adapt the Output Node to something similar to the Scene Left Eye/Right Eye folders or create a new output node. To be decided.

Join Image Node (NEW)

Join Image Node should let the user combine two image inputs into a Stereo Buffer.

Split Image Node (NEW)

Split Image Node splits a Stereo Buffer into two individual image buffers.
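
Again in terms of the hypothetical StereoBuf sketched in the Render section, the two nodes are essentially a pack and an unpack:

#include <stdlib.h>
#include <string.h>
 
/* Join: pack two same-sized mono inputs into one Stereo Buffer. */
StereoBuf *stereo_join(const float *left, const float *right, int x, int y)
{
    size_t eye_floats = (size_t)x * y * 4;
    StereoBuf *buf = calloc(1, sizeof(StereoBuf));
 
    buf->flag = IB_STEREO;
    buf->x = x;
    buf->y = y;
    buf->rect = malloc(2 * eye_floats * sizeof(float));
    memcpy(buf->rect, left, eye_floats * sizeof(float));
    memcpy(buf->rect + eye_floats, right, eye_floats * sizeof(float));
    return buf;
}
 
/* Split: the per-eye outputs are just the two halves of the packed rect. */
void stereo_split(StereoBuf *buf, float **r_left, float **r_right)
{
    *r_left = stereo_buf_eye(buf, 0);
    *r_right = stereo_buf_eye(buf, 1);
}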

Stereo Adjustment Node (NEW)

The Stereo Adjustment Node is intended to adjust the depth of a Stereo Buffer. The input has to be a Stereo Buffer; the node then offsets the stereo pair symmetrically in X. The operation is in pixels. This can be used for multi-rig setups.
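
A sketch of the symmetric offset, once more using the hypothetical StereoBuf; the sign convention and the edge policy (columns shifted in from outside the frame are simply left untouched here) are open design choices.

#include <stdlib.h>
#include <string.h>
 
static void shift_rect_x(float *rect, int x, int y, int offset)
{
    if (offset == 0 || abs(offset) >= x) {
        return;
    }
    for (int row = 0; row < y; row++) {
        float *p = rect + (size_t)row * x * 4;
        if (offset > 0) { /* shift row content to the right */
            memmove(p + (size_t)offset * 4, p,
                    (size_t)(x - offset) * 4 * sizeof(float));
        }
        else { /* shift row content to the left */
            memmove(p, p + (size_t)(-offset) * 4,
                    (size_t)(x + offset) * 4 * sizeof(float));
        }
    }
}
 
/* Offset the pair symmetrically: half the offset to each eye, in
 * opposite directions, changing the on-screen parallax. */
void stereo_adjust_x(StereoBuf *buf, int offset_px)
{
    shift_rect_x(stereo_buf_eye(buf, 0), buf->x, buf->y, +offset_px / 2);
    shift_rect_x(stereo_buf_eye(buf, 1), buf->x, buf->y, -offset_px / 2);
}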

Sequencer (VSE)

This hasn't been thought through deeply yet, but some thoughts from Francesco: "I think that the OGL preview available when you add a scene strip to the VSE should output stereo images based on some output settings (...) [to help] you control the stereo budget for the whole production. Maybe it would be even better to think in advance a system that would allow you to do stereo tweaks from the VSE and have them saved back to the shot file?"

On top of that, I must add that some cutting effects have to be treated differently in stereo (e.g. side cuts). I don't want to push a lot of feature creep into Blender only to support stereo, so I have mixed feelings here.

Hopefully the VSE can be thought through later, once everything else is implemented.

Display Modes

  • Anaglyph
  • Interlace
  • V-Interlace
  • Side-by-Side
  • Quad-Buffer

Pitfalls

Features not covered in this proposal:

  • Dynamic Floating Window (see paper from Brian Gardner)
  • Multi-Rig Cameras (though Stereo Adjustment Node can be used for that)

Roadmap

When can we see this in Blender? I honestly don't know. This is something I have a personal/professional learning interest in, and I want to help push it further. That also means I'm currently planning on tackling it mostly in my own time with no external funding, so don't hold your breath.

That said, once the design is approved by other Blender developers, the data structures can be committed and any interested developer can pitch in. A lot of the work here can be tackled in parallel.

I want to help

If you are a 3D artist experienced with stereoscopy, please wave in and drop your 2 cents. If things go well, at some point I may put together a fundraiser to get equipment (the Sony PlayStation 3D display comes to mind). But for now I want to get the code decisions wrapped up and get some coding rolling.

If you want to help with code, get yourself familiar with github and get started ;)