Tools and Techniques
Originally, MoCap started as simply a technology for translating filmed motion into 3D space. It then evolved in complexity and depth, spreading into different markets. In parallel, the CG and visualization tools evolved, and they are now at a meeting point. So, there are many different tools and techniques for using MoCap.
One of the most valuable tools is the Internet. There are vast libraries of information about MoCap, and actual MoCap files available for use. MoCap is not just humans; it also covers animals, weather systems, and fluids. MoCap is not just used to make Gollum move around on the silver screen; it is used in medicine (joint injury) and sports performance (stroke mechanics). Motion can also be captured and studied to aid in weather prediction, auto body and suspension design, and ship hull design.
Issues and Expectations
First, I need to set expectations. The data in the MoCap file is just that: raw data collected from a real-world recording session. It is not the finished, polished, perfect, complete animation. Animation is still very much an artistic skill, and MoCap seeks to augment that ability by providing the artist with another tool by which to closely inspect real-world motion. Just as a good animator uses a mirror and reference video, so too can an animator use MoCap data to provide realistic movement. However, the state of the art is that it is NOT just a simple plug-and-chug exercise and out pops a finished product. MoCap is just another tool in the animator's toolbox.
There are several issues with MoCap data that I have seen:
- Volume and quality of data
- The motion may be captured with one, two, eight, or 24 cameras, providing different levels of resolution and data continuity. The equipment may produce steady data, or the markers may jitter.
- Artistic skill
- The quality of acting varies widely around the world. The people digitizing the data may be professionals, or may be volunteers or hobbyists. The actors wearing the suit may be professional actors or may be some college freshmen from the Computer Science department. Artistic interpretation is an issue as well; the word “run” may vary between a jog in the park and a mad dash to escape from a pack of rabid dogs. If the actor was tired, or timid/embarrassed, or simply had a different action in mind, the performance may not be what you want.
- Actor size
- The size and bulk of the actor will not exactly match your model, unless you have specifically developed your model to match the actor, as was done in the CG movie "Beowulf".
- Marker set and placement
- The suit itself may provide a few markers or many. The data that is collected can vary from source to source, as each source uses a different MoCap suit and capture system. One source may use a suit with a marker in the middle of the chest, and another may use a suit with a marker on each collarbone. When comparing imports, it can be very confusing as to what part of the body is represented by each marker and where it was placed for that recording session. Markers can be placed on different locations from session to session; for example, the headband can be tilted and not span the crown of the head, or be slipped down lower toward the nape of the neck. This would give the impression that the actor was looking up, when in fact they were not.
- Session Duration and Frame Rate
- The capture session may have been short or long, and may include one or several takes to choose from. Therefore, the kind, quality, and amount of data for a given type of motion may vary widely. For example, a MoCap file named "Running" may range from 12 markers of information from some guy jogging on a treadmill for three seconds, to a 40-marker, 30-second capture of a professional athlete running a zig-zag 100-yard obstacle course.
- Markers get missed
- Markers fall off during a recording session, or become obscured by the actor, clothing, or something on the set. The data for that marker may then "freeze" in place, jump to zero/origin, or do something else, depending on the recording hardware and software. Therefore, you may have to clean the data.
- Data may be absolute
- If the MoCap data is parsed by the import routine as absolute locations, you cannot scale or rotate the orientation of the whole data set. This paper presents a process to work around that limitation, because that is where the real utility of MoCap starts to come into play.
- Import code is not perfect
- In my experience with MoCap data files, the importer may have errors, so that even though the data in the file is good, the importer blows up. Even though the markers may be named something meaningful in the data file, the importer may name them "em_27" in the Blend file. This complicates trying to visualize and use all that data floating around in 3D space.
- Bones are inside the Empty cloud
- Translating the empty location to bone positions is also an issue. There are two ends to a bone - the root and the tip. In Blender, you position a root of a bone by the Copy Location constraint, or by extruding it from a parent's tip. You make a bone point/orient through an IK Solver constraint. Since the markers and the empties are on the outside of the mesh, and the bones are usually inside the mesh, we need a way to position and point the bones between the empties.
- Units of measure vary
- The marker data is recorded in a volume space, and the recording lab specifies what "units" its marker data is recorded in. Unfortunately, the units of measure are not saved in the C3D format, as far as I know. For example, the Punch-Strike CMU 02_05.C3D file produces a figure more than 80 Blender units tall. This 80 does not correspond to inches, feet, meters, centimeters, or any other known unit of measure. Your scene may be scaled 1BU=1m, so you want an armature about 1.8BU tall, not 80.
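Several of these issues can be handled by preprocessing the imported point cloud before rigging. The sketch below is illustrative plain Python, not part of Blender's C3D importer; it rescales a marker cloud to a target figure height and patches "frozen" marker samples by linear interpolation. The 80-unit and 1.8 BU figures come from the CMU example above, and the frame/marker data layout is an assumption for illustration.

```python
# Illustrative sketch (plain Python, not Blender's importer).
# frames: list of dicts mapping marker name -> (x, y, z) tuple.

def rescale(frames, raw_height, target_height=1.8):
    """Scale every marker coordinate so an 80-unit actor becomes ~1.8 BU."""
    factor = target_height / raw_height
    return [{m: tuple(c * factor for c in xyz) for m, xyz in f.items()}
            for f in frames]

def patch_frozen(samples):
    """Replace runs of repeated ("frozen") samples of one marker with a
    linear interpolation between the good frames on either side."""
    out = list(samples)
    i = 1
    while i < len(out):
        if out[i] == out[i - 1]:                 # start of a frozen run
            j = i
            while j < len(out) and out[j] == out[i - 1]:
                j += 1                           # find where good data resumes
            if j < len(out):
                a, b, n = out[i - 1], out[j], j - i + 1
                for k in range(1, n):
                    t = k / n
                    out[i + k - 1] = tuple(x + (y - x) * t
                                           for x, y in zip(a, b))
            i = j
        else:
            i += 1
    return out
```

A real cleanup pass would also have to handle markers that jump to the origin rather than freeze, which the same run-detection idea covers if you treat (0,0,0) samples as bad.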
This tutorial explains how to accomplish and/or work around many of these issues using Blender.
Approaches to Using MoCap
This tutorial focuses on the use of MoCap data to drive CG characters by an armature or skeleton system. There are three basic approaches when using a MoCap file for CG character animation:
- Use the MoCap data as a guide to help you in animating your rig
- Adapt the MoCap data to drive your rig
- Use the MoCap data to drive a new rig, based on the data points
Because the markers are placed on the outside of the actor, and because the actor is just doing "generic" actions, usually within a pretty confined space, you may want to just use the mocap data as a guide. In this case, you might want to put your own markers on your model (making sure you can move them to a hidden layer later on) and position your rig so that its mesh approximates the location of the empty for that frame.
Alternatively, you can use the actual empties to drive your rig. In this case, you assign a 1-bone length IK Solver constraint so that the indicated bone points at the empty. You can delete any empties that you do not need.
An Approach to Building a Constrained Rig
There are two positions for a bone: Edit and Pose. The edit position is a location and rotation set when the bone was added to the armature in edit mode. The edit position never changes from frame to frame; it can only be changed by editing the armature. In this workflow, the edit position of each bone is derived from the basic human skeleton, sized to match the actor.
The pose position is determined by blending a pose interpolation (Ipo) curve and constraints. The Ipo curve specifies an exact location and rotation, and constraints influence that location/rotation (LocRot) to determine the ultimate pose at that frame. When using MoCap data to guide bone movement, you use the locations of one or more empties to guide each bone's pose. Bone pose movement is both location and rotation, and both are derived from the locations of the relevant empties at any given frame.
Blender provides a tool that computes these derived locations automatically, on a continuous (even intra-frame) basis: the Constraint. A constraint is applied to a bone and influences some aspect of its pose. This restriction can be absolute, or relative to some other object. The constraints applicable to our problem domain are:
- Copy Location
- Track To
- IK Solver - Target and Pole
To illustrate these three constraint concepts, let's do the following mini-tut-within-a-tut. You may perform this mini-tut with a new Blender session or download the Mini-Tut Rig and merely explore, or use the Rig to verify your results. The overall workflow for this step is:
1. Visualize and identify markers, choosing an origin position and frame for the Armature object.
2. Create the armature object.
3. For each bone needed to provide a realistic representation of the creature:
- Visualize and create a reference point for the bone root (origin)
- Visualize and create a reference point for the bone tip
- Extrude or Add a bone of appropriate length and general orientation
- Constrain the bone to either a derived 3D position or the tip of an IK chain
- Select or derive a 3D point which determines the bone's orientation in 3D space
- Track the bone to face a desired orientation
- Constrain the reference empty to a 3D space location indicative of the bone's direction
- IK constrain the bone to the reference
4. Verify results by playing the animation, correcting any defects.
Create Initial Rig
Start a new Blender session and use a workspace that suits animation (select SCR: 1-Animation from the User Preferences window header). In Front view (Numpad 1), at frame 1, add four empties, positioned as shown in the picture to the right. To add each empty, in the 3D View select Spacebar->Add->Empty and enter its coordinates in the Transform Properties panel (press N to show this panel so you can enter the numbers directly). Rename each empty as indicated below by entering the name in the OB: field of the Transform Properties panel.
- RFWT: (-1,0,2) - this empty represents the marker stuck to the actor's Right Front Waist
- LFWT: (1,0,2) - this empty represents the Left Front Waist marker
- LKNE: (1,0,1) - this represents the marker for the Left Kneecap
- LANK: (0.6,0,0) - this empty represents the marker for the Left Ankle
These empties represent the point cloud that is created when you import the MoCap file. Now let's build an armature to use this cloud. Add an Armature at (0,0,0) (spacebar->Add->Armature, and then make the Transform entries). Select and move the base of that first bone up to a Z value of about 2.1. Select the tip of the bone and move it to about (.5,0,2). Then press E to Extrude another bone from that tip downward to a Z of 1, and click to drop it. The tip of this second bone should now be selected, so press E to Extrude another bone from that tip downward to a Z of about 0. As you select each bone, you can name them "Hip.L", "UpperLeg.L", and "LowerLeg.L". The ".L" extension means "left" and Blender has some automatic tools for mirroring and renaming armature bones automatically if you use this naming standard. For this mini-tut, we will just work with these three bones. Tab out of Edit mode. You should have a Front view like that pictured to the right.
Front view represents the view of 3D space as if you were upright in a normal viewing position. +X is to your right, Z is up, and Y points away from you. This is a right-handed coordinate system: hold out your right hand with the thumb pointing along +X and the index finger along +Y, and your middle finger points along +Z. When an object is added, based on your user preferences, by default it aligns in 2D space along the XY plane. Viewed from the front, however, you would be looking at it edge-on and not really see it. So, when adding in Front view, Blender assumes you want to see the object and rotates it about the X axis to point up. You may have noticed a RotX setting of 90 in the Transform Properties panel. We should apply this rotation: from the 3D View header in Object mode, click Object->Clear/Apply->Apply Scale/Rotation, or just press CtrlA, and confirm. This resets the object so that it carries no rotation when we start, but is oriented the correct way.
Using Object Location Constraints
There are two kinds of locations: a location of an Object, and a location of a Bone. Both are called Location Constraints, but one applies to the overall armature (if the object is an armature) and the other applies to a posed bone within the armature. Now, when working with armatures, there are two modes: Object Mode and Pose Mode, and they are very different. An armature consists of many bones. Each bone may or may not be connected to another bone. Generally, you have one armature for a character, and that armature may have many bones; some are connected in a chain, and others are just free-floating. The whole armature moves around in the scene as a complete object. Bones move and shift position within the armature as the character moves.
Let's begin with the object location example. Add an Empty and call it "root". In the Buttons Object Context Object Subcontext, add two Copy Location constraints by clicking the Add Constraint button in the Constraints panel and selecting Copy Location from the menu. For the first constraint, copy the location of RFWT by entering RFWT in the Target: field; leave the full Influence of 1.0. As soon as you enter RFWT in the Target field, the root Empty jumps to the same location as the RFWT empty. Now add another constraint by clicking the Add Constraint button a second time and entering Target: LFWT. The Empty root jumps to the other location. In the Influence field, slide the slider to 0.5 (or click and enter 0.5). You have now told the empty to start at RFWT and go half-way to LFWT, half-way in between.
Test: For fun, RMB click and G move the Empty LFWT up and down and all around. You should see the Empty root move half as much, almost as if there were a rubber band connecting the two waist empties and the root were attached half-way between.
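Stacked Copy Location constraints evaluate top to bottom, each blending the current location toward its target by its influence. A minimal plain-Python sketch of that evaluation (the marker coordinates are the ones entered above; this is an illustration, not Blender's constraint code):

```python
def copy_location(current, target, influence):
    """Blend the current location toward the target by the given influence."""
    return tuple(c + (t - c) * influence for c, t in zip(current, target))

# The two-constraint stack on the "root" empty:
RFWT = (-1.0, 0.0, 2.0)
LFWT = (1.0, 0.0, 2.0)
root = (0.0, 0.0, 0.0)                 # wherever the empty started
root = copy_location(root, RFWT, 1.0)  # first constraint: snaps to RFWT
root = copy_location(root, LFWT, 0.5)  # second: goes half-way to LFWT
```

After both constraints, root sits exactly half-way between the two waist markers, which matches the rubber-band behavior seen in the test above.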
Now let's use that Empty root to constrain our armature hip bone. RMB select the armature and select "Pose" mode in the 3D View header by LMB clicking the up-down selector next to "Object Mode" and selecting Pose Mode. RMB select the Hip.L bone. In the Object:Object buttons, your Constraints panel should be blank but ready to add a constraint to Bone: Hip.L (if it says Object: Armature, you are not in Pose mode). Add Constraint Copy Location and enter a Target: root.
Test: RMB click and G move the Empty RFWT up and down and all around. You should see the hip and leg bones move half as much, since the leg bones were extruded from, and are thus connected to, the hip bone.
Now you can see that if the actor jumped up during the MoCap session, your armature would as well, copying every nuance of their movement. But what about when they turn? In top view, if you move the RFWT empty forward/backward in the Y direction, the hip bone moves, but stays facing forward. This is not good, because if one waist marker moves in front of the other, it means the actor was turning, and we want our armature to turn as well. If the markers themselves turned, we could copy their rotation. Unfortunately, the markers are just points in space and there is no way to copy their rotation. What we can do though, is to make the Hip bone point to the left waist marker.
Creating IK Targets
An Inverse Kinematic (IK) solver computes the pose of a chain of bones so that they point to the Target. In our case, we want our left hip bone to point to the left marker. Add a second Constraint to the Hip.L bone, and choose IK Solver as the constraint type. In the Target OB: field, enter LFWT. Instantly, the bone points to the marker. Since the leg bones want to retain their relative orientation to the hip bone, they swing outward.
Test: In top view now, as you move the right or left waist markers forward or back, the bone stays half-way in between, and points to the left marker. Because of the spatial and rotational constraint, our armature bones turn as well to reflect how the actor was turning.
The two constraints on the Hip.L bone should look like the image to the left. Now add an IK constraint to the LowerLeg.L bone, to point to the LANK marker. As the LANK marker goes up and down, you can see the whole chain of bones, all the way back to the hip bone, bend and adjust to track the empty. Now, that is not the way humans are actually put together; our hip is one solid bone. So, we really only want the two leg bones to adjust, bending at the knee. Change ChainLen: to 2, and now the third bone back, the hip bone, no longer moves up and down. However, the knee joint now goes crazy depending on where you move the LANK empty. We need to constrain that knee to point somewhere consistently. This is done with a Track To constraint.
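With ChainLen: 2, the solver bends just the two leg bones to reach the target. The two-bone case has a closed form via the law of cosines; the sketch below is a plain-Python illustration working in the X-Z plane, not Blender's actual (iterative, more general) IK solver, and the function and variable names are my own.

```python
import math

def two_bone_ik(hip, ankle, thigh_len, shin_len):
    """Place the knee for a two-bone chain reaching from hip toward ankle.
    Points are (x, z) pairs in the X-Z plane; returns the knee position."""
    dx, dz = ankle[0] - hip[0], ankle[1] - hip[1]
    # Clamp the reach so an out-of-range target gives a straight leg.
    dist = min(math.hypot(dx, dz), thigh_len + shin_len)
    # Law of cosines: angle at the hip between the hip->ankle direction
    # and the thigh bone.
    cos_a = (thigh_len**2 + dist**2 - shin_len**2) / (2 * thigh_len * dist)
    a = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(dz, dx)              # direction from hip to ankle
    return (hip[0] + thigh_len * math.cos(base + a),
            hip[1] + thigh_len * math.sin(base + a))
```

Note that the knee can swing to either side of the hip-ankle line for the same target; that ambiguity is exactly why the next step constrains the knee's direction.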
Up until now, you have only been working in Front view, and if we were only using MoCap data for 2D applications, we would be fine. However, in 3D space, these bones need to twist and turn. In Blender, we call that Tracking. Tracking, and the Track To constraint, twists an object so that it faces another object. A common use of the Track To is to make a camera point to an object, so that if the object moves, the camera stays on it. In our case, we want this knee to point to the knee marker. If you stretch out your leg, you can see that you can rotate your upper leg in its socket, which makes the knee face inward or outward. In our MoCap data, there is a marker placed on the knee for that very reason, to tell us which way the upper leg bone rotated.
With the UpperLeg.L bone selected in Pose mode, in the Object Object buttons, enable Axis display. This allows you to see the orientation of the bone. In this case, my bone was oriented with the Y axis pointing backward, away from the IK target, Z pointing upward, and X pointing to the right in the right-handed coordinate system discussed earlier. Therefore, I want this bone to face the knee empty using the Y axis, with Z being "up". Thus, create a Track To constraint with the target as LKNE, To: as Y, and Up: as Z. This constraint is shown in the image to the right.
Test: Now, as you move the LKNE around the knee joint, the upper bone (and its child the lower leg bone) rotate to face the empty, perfectly mimicking a real upper leg bone rotating in its socket, and a knee joint connecting the two bones. As you move the ankle target LANK up and down, and position the LKNE marker appropriately, you get realistic kicking and dancing poses.
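What Track To computes can be sketched as building an orthonormal basis: the To axis (Y here) points at the target, and the basis is twisted so its Z axis stays as close as possible to the chosen Up direction. This is a plain-Python illustration of that idea, not Blender's actual constraint code:

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def track_to(position, target, up=(0.0, 0.0, 1.0)):
    """Return a basis (x, y, z): Y faces the target, Z stays near 'up'."""
    y = normalize(tuple(t - p for p, t in zip(position, target)))
    x = normalize(cross(y, up))   # perpendicular to both aim and up
    z = cross(x, y)               # re-derived up, exactly perpendicular to y
    return x, y, z
```

The sketch degenerates when the aim direction is parallel to the Up vector (the cross product goes to zero), which is why the Up axis you choose must not coincide with the To axis.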
You now have all the fundamentals needed in order to use MoCap data to rig your armature and thus deform your mesh to mimic realistic motion. You know how to pose a floating bone in 3D space based on the location of one or more Empties. You know how to point a bone at another position in 3D space using the IK Solver. We have also covered IK Chains, and how Blender will solve the pose angles to meet the constraints you have imposed. For each bone in the armature, the workflow is:
1. Identify and rename the Empties that will participate as IK Target constraints.
2. Add any Empties needed and constrain their locations as needed to control either the root location or the tip IK direction.
3. In Pose mode, for the root bone in the chain, constrain its location.
4. Point the bone at the IK Target using an IK Solver constraint.
5. Orient the bone to face an intended direction by creating a Track To constraint.
|Process Leads Animation|
|Once this process was created, I was able to automate it. The C3D Import script can now recognize some marker sets, and can build a constrained Armature automatically. If the marker set is not recognized, you will have to build one manually :( However, the structure is evident in the code, so you can add that marker set yourself, and future files you process will also automagically create the armature.|
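The recognition step amounts to checking whether the file's marker names contain one of the known sets. A hypothetical sketch of that idea follows; the set name and marker table here are invented examples (using the marker names from this tutorial), not the importer's actual tables.

```python
# Hypothetical marker-set table -- the real C3D Import script keeps its own.
KNOWN_SETS = {
    "waist-and-left-leg": {"RFWT", "LFWT", "LKNE", "LANK"},
}

def recognize_marker_set(marker_names, known_sets=KNOWN_SETS):
    """Return the name of the first known set fully contained in the
    file's markers, or None if the rig must be built manually."""
    present = set(marker_names)
    for name, required in known_sets.items():
        if required <= present:   # every required marker is present
            return name
    return None
```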
Blender's Action Editor
An action is a transition of an armature between poses. Actions are generic motions, like a walk cycle. Motion capture sessions are centered around one or more actions by one or more actors. We use the term "primitive action" to describe a fundamental unit of motion, with an eye toward cycling and re-using that action. Blender provides the Action Editor as the primary tool to work with an overall armature pose. For example, the action team-hand-shake might have the following primitive actions:
- clasp
- pump
- release
Each of these primitives can be combined or used as needed in the final animation. The start and end point of each primitive action is denoted by a keyframe, which is the frame number within the action where the start or end pose occurs.
Blender's Pose Library
Blender 2.46 added the ability to key poses of an armature within an Action. After building a library of poses for an armature, the animator can easily block in pose-to-pose animation by going to the frame where they want the armature to strike a pose, selecting the pose from the pose library, and keying it to lock it in at that frame. Since Blender automatically links each pose together through interpolation curves, the animator can rapidly assemble an animation without having to pose individual bones. Alternatively, the animator can pick a keyed pose from a baked Action, and use that as a starting or ending point for their animation. Either way, the animator is guaranteed to have a "natural" pose as part of their animation.
Blender's Non-Linear Animation
In making a final animation, the actions are blended together and arranged into a non-linear animation sequence. The script may call for two actors to shake hands for a long time (12 pumps, for example) in frames 1000-1200. Using Blender's NLA Editor, the animator pulls in the team-hand-shake action, extracts the clasp, cycles through four iterations of the pump cycle, and then the release primitive action. The pre-clasp pose is blended into the clasp pose to provide a seamless transition into this action.