Google summer of code: Camera tracking integration
NOTE: The Google Summer of Code program has finished and the camera tracking project was merged into trunk. The information here might be outdated; more recent information can be found here
This is the homepage of my Google Summer of Code project, "Camera tracking integration into Blender".
Here are some links to pages with more detailed info:
Current workflow state
- Load image data: a video or a sequence of images.
- [Optional] Input known camera parameters.
- [Optional] Provide hints to the camera lens undistortion solver (identify straight lines in the image) to help with undistorting the image.
- [Optional] Undistort the footage using the distortion solver.
- [Optional] Create a mask for data you don't want tracked.
- [Optional] Adjust color/contrast/etc. so that feature points to track have increased contrast and are thus easier for the tracker to find.
- [Optional] Select specific features you want tracked, either by mask or by placement of tracking points. (Feature selection is implemented; masks are not.)
- If you specified specific features to the tracker, you may also want to specify how the points should be tracked: a bounding box in which the tracker looks for the point; whether the tracked object is rigid or can deform; and knowledge of the types of camera/object motion (object translation/rotation, camera translation/rotation).
- Send image data, specified track data, camera data, and mask data to the tracker.
- The tracker library can automatically identify feature points to track and/or use user-specified features, ignoring areas that are masked (no mask support yet).
- Do a 'solve' of the camera motion based on the track points, including statistics on how 'good' the tracks are in contributing to the solve.
- Return track point data and camera solve to software, including the statistical analysis of the track points.
- Based on the statistical analysis, pick error thresholds for which track points to automatically delete.
- [Optional] Manually delete any track points.
- [Optional] Create a mask to 'hide' unwanted track points.
- [Optional] A mask can be assigned to follow a set of track points, to automatically mask a moving object from the tracker/solver.
- [Optional] A mask can be manually keyframed so that it moves and deforms over time to mask a moving object from the tracker/solver.
- [Optional] Provide a manually created camera curve to 'hint' the tracker/solver what you expect the actual curve to look like.
- Retrack if additional tracker points are now needed.
- Pass the tracker points, camera hints, etc. to the camera solver.
- Return the solved camera track and the 3d location of the track points to the software.
- Visualize the camera motion and track points in the 3d view.
- Define a ground plane reference to the 3d track points and camera.
- Define the scene origin relative to the 3d track points and camera.
- Define the world scale and orientation relative to the 3d track points and camera.
- Add a test object into the 3d view and see if it stays in the proper location.
- [Optional] Stabilize the camera view based on the solved camera track.
- [Optional] Smooth the camera track curve.
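To illustrate the lens undistortion step above: tracking libraries commonly model lens distortion with a polynomial radial model, and undistortion inverts that model numerically. This is a minimal Python sketch of that general technique, not Blender's or libmv's actual API; the coefficients `k1` and `k2` are illustrative values.

```python
def distort(x, y, k1, k2):
    """Apply polynomial radial distortion to a normalized image coordinate.

    The point is scaled by (1 + k1*r^2 + k2*r^4), where r is its
    distance from the optical center.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale


def undistort(xd, yd, k1, k2, iterations=20):
    """Invert the distortion by fixed-point iteration.

    Start from the distorted coordinate and repeatedly divide out the
    radial scale computed at the current estimate; for small distortion
    coefficients this converges quickly.
    """
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y
```

Straight lines in the image are useful hints precisely because they let a solver estimate `k1` and `k2`: under this model, only distortion bends them into curves.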
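The statistics-driven cleanup step (picking error thresholds for which track points to delete) can be sketched as follows. This is a hypothetical example assuming the solver reports a per-track average reprojection error in pixels; the function and parameter names are illustrative, not part of Blender's API.

```python
def filter_tracks(track_errors, threshold_px=2.0):
    """Split tracks into kept and dropped sets by reprojection error.

    track_errors maps a track name to its average reprojection error
    (in pixels) as reported by the solver; tracks whose error exceeds
    the threshold are candidates for automatic deletion.
    """
    kept = {name: err for name, err in track_errors.items()
            if err <= threshold_px}
    dropped = sorted(set(track_errors) - set(kept))
    return kept, dropped
```

In practice the threshold would be chosen from the error distribution itself (e.g. relative to the median error), and manual deletion remains available for tracks the statistics miss.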
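Finally, the optional smoothing of the camera track curve can be done with something as simple as a centered moving average over the per-frame values of each channel (location or rotation). A minimal sketch, not the actual smoothing used by Blender:

```python
def smooth_curve(values, window=5):
    """Centered moving average over a list of per-frame values.

    The window is clamped at the ends of the sequence so the smoothed
    curve keeps the same number of frames as the input.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

A wider window removes more jitter at the cost of softening genuine camera motion, which is why smoothing is left optional and user-controlled.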