Introduction to Character Rig Design
In this section I'll show how character animation evolved from its earliest beginnings into what it is today. This will give you an understanding of the universal concepts and techniques of character control used in 3D digital character animation. Then we'll delve into the specifics of how these techniques are implemented in Blender.
The Early Days
The first 3D characters began appearing in film in the early-to-mid 1980s. In 1985, the rock group Dire Straits released a music video for their song "Money for Nothing", which featured a pair of low-detail polygonal characters.
How to Make A Move
3D applications of that era had no such thing as a skeleton object, so how would someone animate a character that had no skeleton? The solution was to approximate the shape of the character with a series of separate objects and place them into a hierarchy (link).
This led to a fairly straightforward animation technique: animators simply keyed the rotation of each of these objects. You can try this for yourself by creating a hierarchy of objects and rotating them one at a time. This approach became known as Forward Kinematics, or FK.
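The mechanics of FK can be sketched in a few lines of plain Python (not Blender's API; the names here are purely illustrative): each "bone" stores only a local rotation, and a child's world position is found by accumulating its parents' rotations down the hierarchy.

```python
import math

# A minimal 2D forward-kinematics sketch: each bone has a length and a
# local rotation, and each child inherits the accumulated rotation of
# its parents, just like rotating a chain of parented objects.
def fk_chain(lengths, local_angles):
    """Return the world-space joint positions of a planar FK chain."""
    x = y = 0.0          # chain root at the origin
    angle = 0.0          # accumulated world rotation
    joints = [(x, y)]
    for length, local in zip(lengths, local_angles):
        angle += local   # the child inherits the parent's rotation
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        joints.append((x, y))
    return joints

# Two bones of length 1, each rotated 90 degrees locally:
# the first points straight up, the second doubles back to the left.
print(fk_chain([1.0, 1.0], [math.pi / 2, math.pi / 2]))
```

Rotating an early joint swings everything below it, which is exactly why FK alone can't hold a foot still on the ground.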
A Skeleton Was Born
FK is a good process for generating movement, but building characters out of multiple separate objects looks fake. If we want something to look organic, the whole character needs to consist of a single mesh. There are no breaks in the skin around a human elbow, and so it should be with the mesh of a human CG character. So the skeleton object was created. It can be rotated, translated, and scaled just like most other types of 3D objects, but it's special because we can use it to move the vertices of a mesh object. In Blender, this object is called an Armature. Note that Hooks are also objects that can move the vertices of mesh objects.
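The idea of a skeleton moving the vertices of one continuous mesh can also be sketched numerically. The following is a toy 2D version of linear blend skinning, the same general idea behind armature deformation; it is not Blender's API, and all the names and values are illustrative.

```python
import math

# A toy "armature deform": each vertex is influenced by bones through
# weights, and its deformed position is the weighted blend of each
# bone's transform applied to it. 2D, rotation-only, illustrative.
def rotate(point, pivot, angle):
    px, py = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + c * px - s * py, pivot[1] + s * px + c * py)

def skin_vertex(vertex, bones, weights):
    """bones: list of (pivot, angle); weights: one weight per bone."""
    x = y = 0.0
    for (pivot, angle), w in zip(bones, weights):
        dx, dy = rotate(vertex, pivot, angle)
        x += w * dx
        y += w * dy
    return (x, y)

# A vertex near the elbow, influenced half by the upper-arm bone
# (unrotated) and half by the forearm bone bending 90 degrees about
# the elbow at (1, 0) -- the skin stretches smoothly, with no break.
print(skin_vertex((2.0, 0.0),
                  [((0, 0), 0.0), ((1, 0), math.pi / 2)],
                  [0.5, 0.5]))
```

Because each vertex blends the influence of several bones, the mesh bends smoothly at the joints instead of splitting into separate pieces.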
Back to Kinematics
There was still another issue to deal with. Consider two examples: an arm and a leg. An arm can move freely through the air, but a leg needs to make contact with the ground. The foot must be completely motionless (relative to the ground, not the body), or any chance at the appearance of realistic movement is lost. If we want the character to move from standing to crouching, how do we keep the foot in one place while the knee joint and the hip joint are both rotating?
The solution to this problem was a new method that uses an iterative process to rotate the bones so that they reach toward a target. As long as the target is within reach, the chain of bones rotates as needed to touch it. This process is called Inverse Kinematics, or IK.
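One classic iterative IK scheme is Cyclic Coordinate Descent (CCD). This is a minimal sketch of the idea in plain Python, not Blender's actual solver: sweeping from the tip of the chain back to the root, each bone is rotated so the chain tip swings toward the target, and the sweep is repeated until the tip converges.

```python
import math

def fk(lengths, angles):
    """World-space joint positions of a planar chain (helper)."""
    x = y = a = 0.0
    pts = [(x, y)]
    for length, da in zip(lengths, angles):
        a += da
        x += length * math.cos(a)
        y += length * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_ik(lengths, angles, target, iterations=32):
    """CCD sketch: nudge each joint so the tip reaches the target."""
    angles = list(angles)
    for _ in range(iterations):
        for i in reversed(range(len(angles))):
            pts = fk(lengths, angles)
            jx, jy = pts[i]                 # this joint's position
            tx, ty = pts[-1]                # current chain tip
            to_tip = math.atan2(ty - jy, tx - jx)
            to_tgt = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += to_tgt - to_tip    # swing tip toward target
    return angles

# Two bones of length 1 reaching for a target inside their range:
angles = ccd_ik([1.0, 1.0], [0.3, 0.3], (1.2, 0.8))
print(fk([1.0, 1.0], angles)[-1])  # tip lands very near (1.2, 0.8)
```

The animator only ever touches the target; the per-bone rotations fall out of the solve, which is exactly what lets a foot stay planted while the hips drop.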
When you work with FK, you rotate the character's joints yourself. When you work with IK, you move a target, and the bones rotate to touch it, so the rotations are done for you. Sounds great, right? Who needs FK, then? Well, animators do, and the following image shows that IK and FK have very different behaviors.
Both of these arms have two key frames (link here), but the one on the right is animated with IK, and the one on the left with FK. For the arm on the right, this means the IK effector has a key frame at its starting position and another at its ending position. To Blender, the animator is saying "go from point A to point B", so the effector travels in a straight line. But is that what we want? Maybe, maybe not; it depends on the character. If this is the arm of an Olympic swimmer doing laps in a pool, then we probably want a movement more like the arm on the left. But if this is the robotic arm of a CNC machine cutting a perfectly straight line into a block of metal, then we probably want the motion of the arm on the right. Often, however, we need both IK and FK on the same character, but at different moments. Maybe our Olympic swimmer is going to swim over to the steps, walk out of the pool, and open a sliding glass door, in which case his hand needs to sync up with the handle of the door as it moves. We'll cover IK/FK blending later.
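The difference shows up clearly in the numbers. Between two key frames, IK interpolates the effector's *position* while FK interpolates the joint's *angle*. A small sketch (hypothetical values, not tied to any particular rig) for a one-unit arm swinging 180 degrees:

```python
import math

t = 0.5  # halfway between the two key frames

# IK: the key frames store hand positions (1, 0) and (-1, 0).
# Halfway along the straight line between them is the origin --
# the hand passes right through the shoulder.
ik_hand = ((1 - t) * 1.0 + t * -1.0, (1 - t) * 0.0 + t * 0.0)

# FK: the key frames store shoulder angles 0 and 180 degrees.
# Halfway is 90 degrees, so the hand stays out at arm's length,
# sweeping an arc overhead.
angle = (1 - t) * 0.0 + t * math.pi
fk_hand = (math.cos(angle), math.sin(angle))

print(ik_hand)  # (0.0, 0.0): straight-line travel
print(fk_hand)  # roughly (0.0, 1.0): arc travel
```

Same key frames, very different in-betweens, which is why a rig often needs both behaviors available.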
Hopefully the information we've covered so far has helped you understand how character animation evolved, what its methods and objectives are, and how the FK and IK techniques fit into the bigger picture of character animation.
As you progress further into the topic of character animation, you'll see the terms "animation" and "rigging" thrown around, but you might not completely understand the meanings and differences of these two terms. "Animation" means a couple of things. On one hand, it's the specific act of simulating movement. On the other, basically everything in film, TV, and games is done for the sole purpose of creating animation.
Often, when industry professionals are asked, "what do you do?", they will respond, "I'm a computer animator". This doesn't have to mean that they actually do the animating, but instead means that they work for a company that produces animations. In this regard, all subjects of computer graphics can be placed under the umbrella of computer animation.
This section talks specifically about the area of computer animation called "rigging". Rigging a character is like designing the controls for a puppet. There are puppets that dangle from strings, puppets that sit on hands, puppets that move with metal bars, and of course, all types of animatronic robot puppets. Even in computer graphics, we still need tools to help us create motion. This is the world of character rigging. Some of our tools are:
- Object hierarchies
- Axis locks
- Degrees of freedom
- Object constraints, and
- Other types of object relationships
Just like real puppets, digital puppets need joints, and so we have skeleton-like digital objects. We arrange these objects into hierarchies so that when a character's neck moves, his head goes with it; and when the upper arm moves, the forearm is not left floating by itself, nor does it need to be animated on its own to keep up. Because it is the child of the upper arm in the hierarchy, it simply stays attached. The same goes for the upper arm as the child of the shoulder, and the shoulder as the child of the spine.
Sometimes we have an object that works as a control for part of our character, but we want to limit the ways it can be manipulated. For these situations we have axis locks, which let us make a bone immovable, unrotatable, or unscalable along chosen axes. There are three axes for each of the three types of transformation, making nine channels in total, and we can lock as many or as few of them as we like to suit our needs.
In some IK chains, we want to limit the range of motion of certain bones. For example, a person's wrist should only rotate about 180º. In Blender we can do this using DoFs (Degrees of Freedom).
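The bookkeeping behind axis locks and rotation limits is simple enough to sketch in plain Python. This is purely illustrative; in Blender, these settings live on bones and IK chains rather than in code like this, and all names here are made up.

```python
import math

def apply_locks(delta, locks):
    """Zero out any transform channel the rig has locked."""
    return {channel: (0.0 if locks.get(channel) else value)
            for channel, value in delta.items()}

def clamp_dof(angle, lo, hi):
    """Keep a joint angle inside its allowed range of motion."""
    return max(lo, min(hi, angle))

# The animator tries to move and rotate a control, but its X and Y
# locations are locked, so only the Z rotation gets through:
delta = {"loc_x": 0.5, "loc_y": 0.2, "rot_z": 2.0}
print(apply_locks(delta, {"loc_x": True, "loc_y": True}))

# A wrist limited to +/-90 degrees of rotation: an attempt to twist
# it to 140 degrees is clamped back to the limit.
print(clamp_dof(math.radians(140), -math.pi / 2, math.pi / 2))
```

Locks constrain what the animator can key directly; DoF limits constrain what the IK solver may do on the animator's behalf.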
The single most used tool in a rig is the constraint. It lets us define relationships that hierarchies alone can't give us. Besides constraints, we also have a few other special tools, such as IPO drivers (link), pydrivers (link), and Python scripts (link).
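To make the idea of a constraint concrete, here is a sketch of what a tracking-style constraint computes each frame: the constrained object's rotation is overwritten so that it always aims at a target, no matter how either object is parented. This is plain illustrative Python, not Blender's constraint stack.

```python
import math

def track_to(obj_pos, target_pos):
    """Return the 2D heading (radians) that aims obj at target."""
    return math.atan2(target_pos[1] - obj_pos[1],
                      target_pos[0] - obj_pos[0])

# An eye at the origin tracking a target up and to the right:
# the constraint hands back the heading, the animator never keys it.
heading = track_to((0.0, 0.0), (1.0, 1.0))
print(math.degrees(heading))  # aims 45 degrees up and to the right
```

Evaluated every frame, a relationship like this behaves as a live connection rather than baked animation, which is what makes constraints so central to rigging.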
--Wavez 15:07, 1 July 2006 (CEST)