From BlenderWiki

Note: This is an archived version of the Blender Developer Wiki. The current and active wiki is available on wiki.blender.org.

Week Progress:

The first draft of the multi-view camera pipeline is nearly finished. I have migrated the necessary components from libmv/simple_pipeline to libmv/autotrack to do multi-view reconstruction. Specifically, here is what I did this week:

  • Converted correspondence track indexes into a set of global indexes, following my mentor Sergey's suggestion. The global track index is constructed by first indexing all tracks linearly, and then replacing the second track index of each correspondence with the first track index of that correspondence. This way, we only need to go over the correspondence list once. The global indexes are then passed to libmv.
  • Worked on the multi-view pipeline. The workflow is similar to that of the single-camera case, because my goal is to get the multi-view code working before refining it. I rewrote some key ingredients, such as track intersection and camera resection, based on the new mv::Reconstruction and mv::Tracks.

What I will do next week:

Unfortunately, I still have some bugs to solve in the multi-view camera resection. For example, track intersection and camera resection are not working as I expected, so there will be heavy debugging work next week. My goal is to get a successful multi-view reconstruction and pass the results back to Blender by the end of next week, which is in accord with my project proposal.

I haven't yet thought about how the object solver (by which I mean the modal solver) will fit into the multi-view pipeline. Projective bundling is also not considered, since it does not seem to be used in the current reconstruction pipeline. The multi-view pipeline still relies on some ingredients from simple_pipeline, such as libmv::CameraIntrinsics, which might be moved to autotrack if we are going to abandon simple_pipeline.