From BlenderWiki



The purpose of this tutorial is three-fold:

  • provide a high-level workflow (life-cycle process and best practices) for using motion capture (MoCap) data in Blender
  • supply and present Blender tools (scripts coded in Python) which are used at critical junctures in the process
  • illustrate an example, providing step-by-step instructions using Blender to blend hand animation and mocap animation into a scene

This is not an academic research paper; the body of work on using geo-spatial data is huge, and citations would fill a whole page. This tutorial builds on the work of others in the Blender community, notably Campbell Barton and Jean-Baptiste Perin, and presents new scripts that extend Blender. The expected audience is an animator of low to medium skill who is comfortable with the Blender interface. I will first present the overall process, to explain WHAT we are doing and WHY, and then examine a test case, providing step-by-step instructions for applying one MoCap data set to one armature. Future releases of Blender hold the promise of more tools, integration, and workflow improvements.


Until now, using MoCap data in Blender has been very limited and of low value to the animator; simply getting something useful was often more trouble than it was worth. However, with some new scripts and features in Blender 2.46, MoCap can be a very valuable tool for creating superb animation. You can easily build a library of reusable actions, poses, and IK targets based on real-world motions. These actions and IK targets can be scaled, moved, and rotated to fit your needs in the final animation, or simply studied to improve your animation skills. Many subtleties and organic movements can be gleaned from watching these sessions.

A famous example of motion capture is Gollum in the movie "The Lord of the Rings". Although the Gollum you see on screen is an entirely digital creation, his movements are those of the actor Andy Serkis. Serkis wore a motion capture suit covered in reflectors which were tracked by 24 cameras, and his physical performance was then transferred onto the computer-generated Gollum model. The animation of the Gollum character is an outstanding example of a great actor providing an excellent basis for animation through motion capture. The goal of this tutorial is to enable you to establish a reusable library of ready-made realistic actions which you can use in animating your character.

This tutorial focuses on a small subset of geo-spatial 3D datastreams, specifically those captured by motion capture systems filming human actors. In a MoCap session, markers are placed on the actor's body and filmed by multiple cameras. The 2D images are then analyzed and, by a form of stereoscopic triangulation, each marker's 3D location is determined relative to an origin in 3D space. Units of measure are applied to an XYZ axis system, giving us a geo-spatial coordinate system. The unit of time becomes the frame: each camera records an image at the same capture instant, and these images are synchronized to provide an accurate 3D coordinate at a precise time. Knowing the frame rate lets us determine, relative to the start of filming, where a marker was at any given moment.
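The frame-and-time bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration, not part of any Blender script; the frame rate and the marker samples are invented for the example.

```python
# Minimal sketch: mapping MoCap frames to elapsed time, and estimating a
# marker's XYZ position between captured frames by linear interpolation.
# The 120 fps rate and the marker path are illustrative values only.

def frame_to_time(frame, frame_rate):
    """Seconds elapsed since the start of filming for a given frame."""
    return frame / float(frame_rate)

def marker_at_time(samples, frame_rate, t):
    """Interpolate a marker's (x, y, z) position at time t.

    samples: list of (x, y, z) tuples, one per frame, frame 0 at t = 0.
    """
    f = t * frame_rate                      # fractional frame index
    i = int(f)
    if i >= len(samples) - 1:
        return samples[-1]
    a, b = samples[i], samples[i + 1]
    w = f - i                               # blend weight in [0, 1]
    return tuple(a[k] + w * (b[k] - a[k]) for k in range(3))

# A marker moving one unit per frame along X, captured at 120 fps:
samples = [(float(f), 0.0, 0.0) for f in range(10)]
print(frame_to_time(120, 120))               # frame 120 is 1.0 second in
print(marker_at_time(samples, 120, 0.0125))  # halfway between frames 1 and 2
```

The same arithmetic is what lets imported MoCap curves be re-timed: scaling the frame rate stretches or compresses the motion without touching the spatial data.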

This tutorial presents a step-by-step example of using this motion capture (MoCap) data in Blender to digitally reconstruct the skeletal motions of the capture subject, in such a manner that individual actions can be isolated. It then demonstrates how these individual actions can be combined in different ways to produce a new desired motion. The armature is skinned with a mesh, textured, and rendered performing the action taken from real-life actors. You can blend hand-animated, pose-to-pose, keyframed, and IK-targeted animation all in Blender. This process simplifies the job of the animator, increases the quality of the resulting product, and improves productivity.


Due to the length of this paper and the medium in which it is presented, it has been split into the following sections (click on the link to visit a section directly, or use the navigation header at the top to progress to the next/previous page):

  1. Section 1. Life-Cycle Process
  2. Section 2. Tools and Techniques
  3. Section 3. From Importing through Constrained Rigging
  4. Section 4. From Baking through Integration


In this paper I have presented a detailed workflow for integrating motion capture into the animation process. Other fields of art and engineering have standardized processes, developed libraries of reference materials, and made use of re-usable standard components to increase quality and reduce cost and time to market. While the field of animation is still an art form, it can benefit from the use of these engineering techniques. The animator can now use Blender as an animation library store.

Blender provides a very comprehensive and integrated tool set which addresses the entire post-digitization workflow. If the reader has worked through the tutorial, they have a rig which can readily be adapted to use any of the Carnegie Mellon Motion Capture C3D files, and they can start building their libraries of re-usable actions. Any library which uses the same suit and marker setup can be processed using this same rig, and the rig can easily be adapted to fit other suits, marker sets, and systems. Since Blender is free, the software can be distributed with motion capture cameras and digitization software, providing a robust suite of full life-cycle tools. Additionally, hardware providers can move into the services and content sales space. At some point, a fully integrated platform for animation will emerge.

Future work includes revising the import scripts to make them more robust, for example by using marker names from the metadata to name empties, and possibly by automatically creating the floating empties and IK targets. For certain suits, a re-usable rig has been developed that follows a generic set of empties: one need only assign the IPOs to each empty, and the rig adapts to that motion automatically. Different rigs for different suits and marker sets remain to be developed, as does synthesizing the BVH rig and the C3D rig to provide a uniform library. The baking process, and the act of discerning re-usable movements, need further refinement. The other import script, for BVH, and its adaptation to this workflow need to be explored and documented. The robustness of datastream converters should also be researched. Lastly, library and repository management, especially as it pertains to digital asset management, needs to be addressed, possibly through synchronous access to a database. The application of this workflow and these tools to other modeling and simulation tools is another avenue for exploration, as is the extension of the process to other applications, such as fluid modeling, with practical applications in weather tracking and industrial design (turbines, automotive, and ship hulls).

Now that Inverse Kinematic (IK) targets and bone Actions can be derived from MoCap data, re-targeting (the process of adapting a skeleton to a skin) can make that armature deform a skin. Once adapted, the skin is given an Armature modifier (specifying the modified armature). I have demonstrated that the armature can be edited (arms lengthened, legs shortened) with little or no adverse effect on the animation quality. Since the skeleton has a huge library of Actions, the new skin (character, avatar) can instantly perform all of those Actions.
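Why an edited armature still plays the same Actions can be illustrated outside Blender: an Action stores joint rotations, and forward kinematics re-derives bone positions from whatever bone lengths the armature currently has. The sketch below is a hypothetical planar example, not Blender API code; the bone lengths and pose angles are invented.

```python
import math

# Illustrative sketch: the same rotation "Action" drives chains with
# different bone lengths, which is why lengthening or shortening bones
# leaves the animation usable.

def fk_positions(bone_lengths, joint_angles):
    """Forward kinematics for a planar bone chain.

    bone_lengths: length of each bone, root first.
    joint_angles: rotation of each bone (radians) relative to its parent.
    Returns the world-space tip position of each bone.
    """
    x = y = 0.0
    angle = 0.0
    tips = []
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta                     # accumulate parent rotations
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        tips.append((x, y))
    return tips

# One "Action" keyframe: bend a two-bone arm 45 degrees at each joint.
pose = [math.radians(45), math.radians(45)]

original = fk_positions([1.0, 1.0], pose)    # original armature
lengthened = fk_positions([1.5, 1.0], pose)  # same Action, longer upper bone
```

The edited chain reaches different world-space positions, but the pose data itself is untouched, so every Action in the library remains valid for the new proportions.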


  1. Carnegie Mellon University
  2. Blender
  3. Python
  4. Blender's Python API
  5. C3D Import Script
  6. BVH Import Script
  7. Animation Bake (creates forward-keyed bone movement from the rig, taking into account IK and any other constraints)

For any questions or comments, you can contact the author through his wiki page.