
CS 838 Project 2 Docs: Motion Clip Blending


Motion clips are rarely complete: a single clip usually performs only part of what you want your whole animation to be. It is therefore advantageous to blend clips together. The technique outlined here is based on the blending technique described in "Motion Graphs" by Lucas Kovar, to appear in ACM SIGGRAPH 2002.

Finding a Blend

Given two motions A and B, we want to find the frames A(i) and B(j) such that the distance between the poses in the two frames is minimized. The notion of pose distance is defined as a norm in a high-dimensional space of joint locations. In particular, a pose PA(i) is a vector in this space: specifically, it is the concatenation of the 3-vectors of joint locations, i.e. P = [head_x, head_y, head_z, neck_x, neck_y, neck_z, ...]. In practice, we want to consider closeness over a window of frames, but that is handled trivially by concatenating further frames into a larger vector, i.e. Window(PA(i)) = [PA(i), PA(i+1), ..., PA(i+k)]. Then we define the closeness of a specific tuple (i, j) to be

    D(i, j) = || Window(PA(i)) - Window(PB(j)) ||^2

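This distance is straightforward to compute directly. Below is a minimal Python sketch of the windowed distance; the array layout (one (n_joints, 3) array of joint positions per frame) and all function names are assumptions for illustration, not the project's actual code:

    import numpy as np

    def pose_vector(joints):
        # Flatten an (n_joints, 3) array of joint positions into one pose vector.
        return np.asarray(joints, dtype=float).reshape(-1)

    def window(poses, i, k):
        # Concatenate the pose vectors of frames i..i+k into one larger vector.
        return np.concatenate([pose_vector(poses[f]) for f in range(i, i + k + 1)])

    def pose_distance(poses_a, poses_b, i, j, k=5):
        # Squared norm between the windows starting at A(i) and B(j).
        d = window(poses_a, i, k) - window(poses_b, j, k)
        return float(d @ d)
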
However, in practice this would yield few "good" blends, because there is a rotation in the XZ plane that we are free to manipulate, and it is unlikely that two clips would happen to be aligned. Therefore, we alter our distance definition to include a transformation that rotates and translates the second clip to be as close as possible to the first (in the least-squares sense). We can compute this angle and translation using the closed-form solution provided by Kovar:

    theta = arctan( (SUM_i w_i (x_i z'_i - x'_i z_i) - (X Z' - X' Z) / SUM_i w_i)
                  / (SUM_i w_i (x_i x'_i + z_i z'_i) - (X X' + Z Z') / SUM_i w_i) )

where (x_i, z_i) and (x'_i, z'_i) are corresponding joint positions from the two windows projected into the XZ plane, the w_i are joint weights, and X = SUM_i w_i x_i, X' = SUM_i w_i x'_i, and similarly for Z and Z'. Thus we have:

    x_0 = (X - X' cos(theta) - Z' sin(theta)) / SUM_i w_i
    z_0 = (Z + X' sin(theta) - Z' cos(theta)) / SUM_i w_i

Once we have computed these values we can find the aligned distance

    D(i, j) = SUM_i w_i || p_i - T(theta, x_0, z_0) p'_i ||^2

where T(theta, x_0, z_0) rotates about the vertical axis by theta and translates by (x_0, z_0) in the ground plane.

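This closed form is easy to implement. The sketch below is an illustrative Python version under the same assumptions as before (array shapes and names are ours, not the system's); NumPy's arctan2 is used so that the quadrant of theta is resolved:

    import numpy as np

    def align_xz(p, q, w):
        # Closed-form least-squares rigid alignment in the XZ plane
        # (Kovar et al.).  p and q are (n, 2) arrays of (x, z) joint
        # positions from the two windows; w is an (n,) array of weights.
        # Returns (theta, x0, z0) carrying the second window onto the first.
        sw = w.sum()
        X, Z = w @ p[:, 0], w @ p[:, 1]      # weighted sums for the first clip
        Xp, Zp = w @ q[:, 0], w @ q[:, 1]    # weighted sums for the second clip
        num = (w * (p[:, 0] * q[:, 1] - q[:, 0] * p[:, 1])).sum() - (X * Zp - Xp * Z) / sw
        den = (w * (p[:, 0] * q[:, 0] + p[:, 1] * q[:, 1])).sum() - (X * Xp + Z * Zp) / sw
        theta = np.arctan2(num, den)
        x0 = (X - Xp * np.cos(theta) - Zp * np.sin(theta)) / sw
        z0 = (Z + Xp * np.sin(theta) - Zp * np.cos(theta)) / sw
        return theta, x0, z0
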
However, in principle there exist multiple local minima, and providing the user a list of these minima or a graph (as we do) is more flexible. We generate an image that shows the goodness of every candidate tuple. The user can drag and be constrained to particular points or can freely choose a point.
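
The graph itself is just the distance evaluated over all tuples. A sketch, reusing pose_distance from the earlier example (a brute-force O(n^2) illustration; the real system may organize this computation differently):

    import numpy as np

    def distance_grid(poses_a, poses_b, k=5):
        # Evaluate the windowed distance for every candidate tuple (i, j).
        # Dark valleys in an image of this grid mark good blend points, and
        # its local minima are the candidate transitions offered to the user.
        na, nb = len(poses_a) - k, len(poses_b) - k
        grid = np.empty((na, nb))
        for i in range(na):
            for j in range(nb):
                grid[i, j] = pose_distance(poses_a, poses_b, i, j, k)
        return grid

A sample of a blend graph is shown below: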

Doing the Blend

Once we have selected a tuple (i, j), we need to actually perform the blend. My system creates a dynamic link between the clips and provides a strobe output to show how the blend will occur (the blend frames are in blue). Once the user is happy with a blend, he can "flatten" it into an ordinary motion that can be exported or manipulated just like any other clip.

The actual blend is performed by using a function to drive an alpha interpolation factor, which goes smoothly from 0 to 1 over the blend interval. This alpha drives a quaternion SLERP for all rotation channels on the skeleton and an ordinary linear interpolation for the skeletal root translation.
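
A minimal sketch of one blended frame follows. The smoothstep-style ease curve, the quaternion layout, and the function names are assumptions for illustration; the text above does not specify the exact alpha function used:

    import numpy as np

    def ease(t):
        # One possible smooth ramp from 0 to 1 with zero slope at both ends.
        return 3.0 * t * t - 2.0 * t ** 3

    def slerp(q0, q1, alpha):
        # Spherical linear interpolation between two unit quaternions.
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        dot = q0 @ q1
        if dot < 0.0:              # flip one quaternion to take the shorter arc
            q1, dot = -q1, -dot
        if dot > 0.9995:           # nearly parallel: normalized lerp is stable
            q = q0 + alpha * (q1 - q0)
            return q / np.linalg.norm(q)
        omega = np.arccos(dot)
        return (np.sin((1.0 - alpha) * omega) * q0
                + np.sin(alpha * omega) * q1) / np.sin(omega)

    def blend_frame(rots_a, rots_b, root_a, root_b, t):
        # Blend one transition frame at parameter t in [0, 1]: SLERP each
        # joint rotation, linearly interpolate the root translation.
        a = ease(t)
        rots = [slerp(qa, qb, a) for qa, qb in zip(rots_a, rots_b)]
        root = (1.0 - a) * np.asarray(root_a) + a * np.asarray(root_b)
        return rots, root

Below, you see the blend frames shown in blue. The green is the start of the new clip and the red is the end of the new clip.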

Finally, we have a video clip of an example blend between some walking motion and a falling death motion:


Last updated May 10, 2002. Andrew Selle  
