This is a very cool paper. It brought back all the memories of computer expos, where vendors used to run morphing demos to show off their monitors. Sweet memories. Anyway, the paper describes the basics of how to morph one 2D image into another. The dumb way to do it is plain alpha blending, but that gives ugly results: the features are not aligned, so we end up with ghosts in the in-between frames.

The paper then describes several algorithms (or really several variations on the same idea). The basic idea is to warp the two images so that their features are aligned before doing the alpha blending. There are several ways to specify that alignment. One method lets the user mark up both images with corresponding line segments; the warp aligns the corresponding areas, and then the blend happens. A pretty cool idea. Other variations let the user plot corresponding points or draw corresponding curves instead of lines, but the overall recipe is the same: warp the two images into alignment, then alpha blend.

The other idea is transition control, where the morph doesn't progress uniformly. The warping can run faster than the blending, or vice versa, which can lead to all sorts of nice effects. In general, we can let the user specify separate speed curves for warping and blending, similar to using IPO curves in keyframed animation.

Finally, the paper presents multi-image morphing, where more than two images are combined. The idea looks a little bit fictitious to me, but it could still have some applications, like building composite faces of criminals for police departments.

Overall, this was a good read. The algorithms are pretty innovative. The only downside for me is that the author jumps from explaining some parts in detail to giving no explanation at all for others.
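To pin the warp-then-blend recipe down for myself, here is a rough sketch of a morph frame driven by a single pair of corresponding line segments (the paper's line-based warps handle many lines with distance weights and nicer resampling; the function names, single-line restriction, and nearest-neighbour sampling here are my own simplifications). Line endpoints are assumed to be np.array([x, y]) in pixel coordinates.

import numpy as np

def warp_by_line(img, p_dst, q_dst, p_src, q_src):
    """Inverse-warp img so the segment (p_src, q_src) of the source image
    lands on (p_dst, q_dst) in the output (single-line field warp)."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x = np.stack([xs, ys], axis=-1).astype(float)        # destination pixel coords

    d_dst = q_dst - p_dst
    d_src = q_src - p_src
    # u is the fraction along the destination line, v the signed perpendicular distance.
    u = ((x - p_dst) @ d_dst) / (d_dst @ d_dst)
    v = ((x - p_dst) @ np.array([-d_dst[1], d_dst[0]])) / np.linalg.norm(d_dst)
    # Same (u, v) position expressed relative to the source line.
    perp_src = np.array([-d_src[1], d_src[0]]) / np.linalg.norm(d_src)
    src = p_src + u[..., None] * d_src + v[..., None] * perp_src

    # Nearest-neighbour sampling, clamped to the image bounds.
    sx = np.clip(np.round(src[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src[..., 1]).astype(int), 0, h - 1)
    return img[sy, sx]

def morph_frame(img0, img1, line0, line1, t):
    """One morph frame at time t in [0, 1]: warp both images toward the
    interpolated feature line, then cross-dissolve."""
    (p0, q0), (p1, q1) = line0, line1
    p_t = (1 - t) * p0 + t * p1            # interpolated line endpoints
    q_t = (1 - t) * q0 + t * q1
    w0 = warp_by_line(img0, p_t, q_t, p0, q0)   # align img0's features to time t
    w1 = warp_by_line(img1, p_t, q_t, p1, q1)   # align img1's features to time t
    return (1 - t) * w0 + t * w1                # then the alpha blend

The returned frame is a float array; the point of the sketch is just that the blend only happens after both images have been pulled onto the same, interpolated feature geometry.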
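Transition control then just means the warp and the dissolve don't have to share the same clock. A tiny sketch, building on warp_by_line above, with made-up easing curves standing in for the user-specified speed curves:

def ease_in_out(t):
    """A smoothstep-style easing curve, similar in spirit to an IPO curve."""
    return t * t * (3 - 2 * t)

def morph_frame_tc(img0, img1, line0, line1, t,
                   warp_curve=ease_in_out, blend_curve=lambda t: t):
    """Morph frame with transition control: the geometry moves on a
    different schedule than the cross-dissolve."""
    tw = warp_curve(t)     # how far the features have moved
    tb = blend_curve(t)    # how far the colours have faded
    (p0, q0), (p1, q1) = line0, line1
    p_t = (1 - tw) * p0 + tw * p1
    q_t = (1 - tw) * q0 + tw * q1
    w0 = warp_by_line(img0, p_t, q_t, p0, q0)
    w1 = warp_by_line(img1, p_t, q_t, p1, q1)
    return (1 - tb) * w0 + tb * w1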
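And multi-image morphing is, as far as I can tell, the same machinery generalized to a weighted average over N inputs; a sketch under the same single-line assumption (again my own names, not the paper's):

def multi_morph(images, lines, weights):
    """Blend N images with weights summing to 1: interpolate the feature
    line as a weighted average, warp every image toward it, then mix."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Weighted-average position of the feature line endpoints.
    p_t = sum(w * p for w, (p, _) in zip(weights, lines))
    q_t = sum(w * q for w, (_, q) in zip(weights, lines))
    warped = [warp_by_line(img, p_t, q_t, p, q)
              for img, (p, q) in zip(images, lines)]
    return sum(w * im for w, im in zip(weights, warped))

Something like this is presumably what the composite-face application would look like: pick a handful of face images and slide the weights around until the blend matches the description.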