Nicholas Penwarden
penwanATcsDOTwiscDOTedu

Problem

How to animate a skinned character under human control in real time.

Results

I have achieved my stated goal of animating a skinned character in real time under human control. The result is a demo application that displays two characters on the screen looping an 'idle' animation. Moving the analog control stick makes the controlled character run in that direction, with its speed determined by how far the stick is pushed. Additionally, the character performs a punching animation when a certain button on the controller is pressed, and another button switches which character the controller drives.

Smooth transitions are not used between animations. Instead I simply cut from one animation to another. Because of this, the motion can seem very unnatural during transitions.

Methods

Motion

The motions I used were motion capture data read in from BVH files, which came from the UW Graphics Group's CVS repository. Available motions included a variety of walking, jogging, sprinting, and strafing motions, as well as several minutes of captured boxing motion.

I used a very simple motion graph for controlling the animation. In fact, it is essentially one node with several edges (representing motions) pointing back at it. As I did not concern myself with smooth transitions between motions, it was easy enough to string together motions in this way without worrying about going through a common pose or finding good transition points.
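
As a rough illustration, such a graph needs little more than a list of clips and an index into them; "choosing an edge" is just picking the next clip to play. The sketch below assumes a Pose type holding one frame of joint rotations -- the names are illustrative, not my actual code:

    #include <cstddef>
    #include <string>
    #include <vector>

    struct Pose { /* per-frame joint rotations and root translation */ };

    struct Clip {
        std::string       name;    // e.g. "idle", "run", "punch"
        std::vector<Pose> frames;  // sampled skeleton poses from a BVH file
    };

    // One-node motion graph: every clip starts and ends at the single node,
    // so any clip may follow any other with a hard cut.
    struct MotionGraph {
        std::vector<Clip> clips;
        std::size_t       current = 0;  // clip currently playing
        std::size_t       frame   = 0;  // playback position within that clip

        void transitionTo(std::size_t next) { current = next; frame = 0; }

        const Pose& tick() {
            const Pose& p = clips[current].frames[frame];
            frame = (frame + 1) % clips[current].frames.size();  // loop the clip
            return p;
        }
    };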

Skinning

I used the method commonly known as linear-blend skinning or SSD (skeletal subspace deformation) to perform skinning operations. The meshes used as skins, like the motion capture data, came from the UW Graphics Group's CVS repository. These meshes included joint weights for each vertex, relieving me of the burden of determining those weights.
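
For reference, linear-blend skinning transforms each vertex rigidly by every joint that influences it and blends the results by the joint weights: each skinned position is the weighted sum of (current joint transform) x (inverse dress-pose transform) x (rest position). A minimal sketch follows; the Vec3 and Mat4 types and the transformPoint helper are assumptions for illustration, not part of my code:

    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { /* 4x4 matrix; implementation assumed */ };
    Mat4 operator*(const Mat4&, const Mat4&);          // assumed helper
    Vec3 transformPoint(const Mat4&, const Vec3&);     // assumed helper

    struct VertexWeight { int joint; float weight; };  // weights sum to 1

    // Deform one vertex with linear-blend skinning (SSD).
    Vec3 skinVertex(const Vec3& restPos,
                    const std::vector<VertexWeight>& weights,
                    const std::vector<Mat4>& jointGlobal,  // current pose
                    const std::vector<Mat4>& invBind)      // inverse dress pose
    {
        Vec3 out{0.0f, 0.0f, 0.0f};
        for (const VertexWeight& w : weights) {
            // Map the vertex out of dress-pose space, then into the current pose.
            Vec3 p = transformPoint(jointGlobal[w.joint] * invBind[w.joint], restPos);
            out.x += p.x * w.weight;
            out.y += p.y * w.weight;
            out.z += p.z * w.weight;
        }
        return out;
    }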

Unfortunately, dress pose (i.e. the pose in which the mesh was designed) did not match the zero pose of the motion data. This presented a huge headache. To fix this, I performed the following steps:

  1. Created a skeleton representing the dress pose of the mesh.
  2. "Dressed" the mesh. What I mean by this is that I calculated the transformations of each vertex in the mesh relative to the weighted composition of the transformations of all joints in the dress pose skeleton.
  3. Applied transformations to the dress pose skeleton such that it matched (or nearly matched) the zero pose skeleton.
  4. Calculated the global position of all vertices in the mesh relative to the transformed dress pose skeleton.
  5. "Dressed" the mesh in the zero pose skeleton.

This left the mesh as a set of vertices with proper transformations relative to the joints in the zero pose skeleton. The process is very hackish. Step 1 was done manually, but I think that is to be expected. Step 3 was also done manually -- this is the most hackish step, and something that should be automated. The rest was, perhaps, unnecessary: I could have stored the intermediate transformations and simply composed them all at run time. I chose to precompute them because it is conceptually easier and less computationally expensive at run time.
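
A sketch of the net effect of steps 2 through 5: bake the hand-tuned adjustment into the vertex positions, then rebind against the zero pose skeleton. It reuses the Vec3/Mat4/VertexWeight names and skinVertex from the skinning sketch above, and inverseAll is a hypothetical helper that inverts each matrix in a vector:

    #include <cstddef>
    #include <vector>

    std::vector<Mat4> inverseAll(const std::vector<Mat4>&);  // hypothetical helper

    // Steps 2-5 above, as one rebinding pass. adjustedDress holds the
    // hand-tuned joint transforms from step 3.
    void rebind(std::vector<Vec3>& restPositions,   // updated in place
                const std::vector<std::vector<VertexWeight>>& weights,
                const std::vector<Mat4>& dressPose,     // original dress pose skeleton
                const std::vector<Mat4>& adjustedDress, // step 3: moved toward zero pose
                const std::vector<Mat4>& zeroPose,      // motion data's zero pose skeleton
                std::vector<Mat4>& invBindOut)          // new inverse bind matrices
    {
        // Steps 2 and 4: recompute each vertex's global position under the
        // adjusted skeleton, using the original dress pose as the bind pose.
        std::vector<Mat4> invDress = inverseAll(dressPose);
        for (std::size_t v = 0; v < restPositions.size(); ++v)
            restPositions[v] = skinVertex(restPositions[v], weights[v],
                                          adjustedDress, invDress);

        // Step 5: the zero pose becomes the new bind pose; store its inverse
        // so skinVertex can be used unchanged at run time.
        invBindOut = inverseAll(zeroPose);
    }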

Input

I chose to use Microsoft's DirectInput library (part of DirectX) to accept input from a game controller -- in this case a PlayStation controller attached to a Windows XP PC's USB port. I found DirectInput simple to use for reading game controllers and well documented in the DirectX SDK.
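
For illustration, the usual DirectInput 8 polling pattern looks roughly like the following. It assumes the device has already been found via EnumDevices (with DI8DEVCLASS_GAMECTRL), created with CreateDevice, and given the joystick data format with SetDataFormat(&c_dfDIJoystick2); error handling is omitted, and this is a sketch of the standard pattern rather than my exact code:

    #define DIRECTINPUT_VERSION 0x0800
    #include <windows.h>
    #include <dinput.h>

    // Read the controller's state once per frame.
    DIJOYSTATE2 pollPad(LPDIRECTINPUTDEVICE8 pad)
    {
        DIJOYSTATE2 state = {};
        if (FAILED(pad->Poll()))  // lost the device? try to reacquire
            pad->Acquire();
        pad->GetDeviceState(sizeof(DIJOYSTATE2), &state);
        // state.lX and state.lY hold the analog stick axes (range depends on
        // the device / DIPROP_RANGE); state.rgbButtons[i] & 0x80 tests button i.
        return state;
    }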

What I Learned

  1. How, exactly, the orientations and offsets in a hierarchical skeleton are stored and used. Specifically, I was originally under the impression that each joint's orientation was applied in its parent's coordinate system before the joint's offset. I quickly learned otherwise, and now know exactly how the system works (see the sketch after this list).
  2. The difficulty of retargeting motions to characters of different proportions. One of the characters I used (the "Army Guy") has proportions quite different from those of the performer in the motion capture data. While I think the motion generally looks "good," it certainly doesn't always seem "right."
  3. The difficulties inherent in dealing with motion data and visual data that were not developed with the intent of being used together. Not only does this cause the retargeting problem above, it also results in the dress pose not matching the zero pose, which turns into an annoyance when trying to make the data match.
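
The sketch referenced in point 1, reusing the assumed Vec3/Mat4 types from the skinning sketch plus a hypothetical translate helper: in a BVH-style hierarchy, each joint's constant offset is applied in the parent's frame first, and the joint's own rotation comes after, affecting only its descendants.

    #include <vector>

    struct Joint {
        Vec3             offset;    // constant, from the BVH HIERARCHY section
        std::vector<int> children;
    };

    Mat4 translate(const Vec3&);   // assumed helper: translation matrix

    // Forward kinematics: offset first, then this joint's rotation.
    void computeGlobals(const std::vector<Joint>& joints,
                        const std::vector<Mat4>& localRotation,  // per-frame channels
                        int j, const Mat4& parentGlobal,
                        std::vector<Mat4>& globalOut)
    {
        globalOut[j] = parentGlobal * translate(joints[j].offset) * localRotation[j];
        for (int c : joints[j].children)
            computeGlobals(joints, localRotation, c, globalOut[j], globalOut);
    }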

Self-Evaluation

I am happy with what I achieved given the time that I had and the fact that I had no experience in the field of computer animation prior to this class/project. Although I had to write a tokenizer and some parsers to load the motion and skin data, this was fairly simple; it helped that the data is stored in a human-readable format. The implementation of linear-blend skinning was also simple.

Several details held me back from the otherwise simple nature of this project. The worst was the mismatch between the dress pose of the skin data and the zero pose of the motion data. This required a lot of hand tweaking, trial and error, and plain old hacking to get right. Either working with data made to work together or having access to a library to deal with this issue would have saved an incredible amount of time.

Additionally, a lot of time was spent just laying groundwork, so to speak: writing parsers for the data, getting the data to work together, and the like. With this groundwork done and in place, working on top of it will be much more productive.

In general, this project worked out well for me, following the plan outlined in the proposal. My only advice to another student thinking of trying a similar project would be to reconcile the skeletons in the skin and motion data early on; I made the mistake of assuming it would not be that hard. I would certainly recommend working with data designed for each other if possible, and working with a library that alleviates this problem otherwise. (I suppose that may fall to me for the next project ;)

How This Project Will Affect the Next Project

I really enjoyed working on this project and am happy with the results. As such, it has only solidified my plans to work on something similar for the next project. Specifically, I am considering the problem of how to properly orient the feet of the character when navigating over uneven terrain while being controlled by a human in real time. I may also work on a more automated library/piece of software to reconcile skeletons between skin and motion data.