Skin Authoring Project
The project began with no particular skinning goal in mind. The first thing we decided to do was build an authoring tool so we could create skins as needed. The standard way to author skins is through a surface painting interface. From the beginning we thought that by making a few improvements to the standard skinning pipeline, we could make it far less time consuming and painful for the skin author. The first improvement is that all mesh vertex distance computations should measure distance over the mesh, not through space. This reflects what we actually want "close" to mean for vertices. With that in mind I started on the painting interface.
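One common way to approximate over-the-mesh distance is to run Dijkstra's algorithm on the mesh's edge graph; this is only an illustration, since the tool's actual method isn't specified here.

```python
import heapq

def geodesic_distances(vertices, edges, source):
    # Shortest-path distance over the mesh edge graph from `source` to
    # every reachable vertex. Approximates geodesic distance; illustrative.
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        length = sum((p - q) ** 2 for p, q in zip(vertices[a], vertices[b])) ** 0.5
        adj[a].append((b, length))
        adj[b].append((a, length))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue
        for w, length in adj[v]:
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return dist
```

Note how two vertices that are nearby in space can still be far apart over the mesh, which is exactly the distinction we wanted.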
I read a paper entitled Adaptive Unwrapping for Interactive Texture Painting and got some good ideas about what is important in a painting interface. My eventual implementation is not really related to the system described in the paper, but its ideas about what a paint stroke should mean and how the interface should provide feedback were a good guide.
The current painting interface includes replace, add, subtract, flood, and smooth operations on the current influence weight value. These tools operate on only one influence at a time. There is also a rubber-stamp-like tool that copies all of the influences from an initially sampled vertex onto each painted vertex. The author can choose to paint only on front- or back-facing polygons, and can adjust the current paint value, the brush radius, and the brush neighborhood value. The brush neighborhood is a geodesic distance threshold on how far away the next painted vertex can be.
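A single-influence paint operation might look like the following sketch. The mode names mirror the tools described above, but the function and data layout are illustrative assumptions, not the tool's actual API.

```python
def paint(weights, influence, hit_vertices, mode, value):
    # Apply one stroke of a single-influence tool to every vertex inside
    # the brush's geodesic neighborhood. `weights` maps a vertex index to
    # a dict of {influence: weight}. Sketch only; names are assumed.
    for v in hit_vertices:
        w = weights[v].get(influence, 0.0)
        if mode == "replace":
            w = value
        elif mode == "add":
            w = min(1.0, w + value)
        elif mode == "subtract":
            w = max(0.0, w - value)
        weights[v][influence] = w
```

Clamping to [0, 1] keeps each per-influence weight valid no matter how many strokes the author applies.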
The initial guess for the per-vertex weights of a skin, if reasonable, can take a huge amount of work off the skin author's shoulders. If there is no initial guess, the skin author would have to look at every vertex and explicitly say which joints should influence it, and by how much. We would like to offer the skin author a good guess, and allow them to tweak the results.
The algorithm that I use splits the problem into two parts. First, it finds which joints should influence a vertex, the influence set. Second, given the influence set, it computes the amount that each joint influences the vertex, the weights, based on a falloff function that maps distance to a weight between 0 and 1.
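As a concrete sketch, a linear falloff (one plausible choice; the text doesn't pin down the actual function) followed by normalization might look like:

```python
def falloff(d, radius):
    # Linear ramp: weight 1 at the joint, 0 at `radius` and beyond.
    # An assumed falloff; any map from distance to [0, 1] would do.
    return max(0.0, 1.0 - d / radius)

def initial_weights(joint_dists, radius):
    # joint_dists: {joint: distance to the vertex} for the influence set.
    # Returns per-joint weights normalized to sum to 1.
    raw = {j: falloff(d, radius) for j, d in joint_dists.items()}
    total = sum(raw.values()) or 1.0
    return {j: w / total for j, w in raw.items()}
```

The normalization step matters: linear blend skinning expects the weights for a vertex to sum to one.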
For more details about the algorithm please refer to the Initial Weight Finding page.
Read more about linear blend skinning.
The current system can show any number of poses simultaneously. The author can save various poses to a BVH file as motion data and read them back in while working, the intention being that there are some number of poses that the author might want to see often while working.
I think that showing multiple poses was a good idea, but more importantly I think that it led to the idea that we could explicitly show the possible positions of a vertex given its influence set and a pose.
Any vertex position in a linear blend skin is expressed as a linear combination of the vertex transformed by each joint's coordinate system. Read more.
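In code, that blend can be sketched like this, using 3x4 affine matrices for the joint coordinate systems (purely illustrative):

```python
def transform(m, p):
    # Apply a 3x4 affine matrix [R | t] to point p.
    return tuple(sum(m[r][c] * p[c] for c in range(3)) + m[r][3] for r in range(3))

def blend_vertex(p, joint_transforms, weights):
    # Linear blend skinning: the skinned position is the weighted sum of
    # the rest-pose vertex transformed by each influencing joint.
    out = [0.0, 0.0, 0.0]
    for m, w in zip(joint_transforms, weights):
        q = transform(m, p)
        for i in range(3):
            out[i] += w * q[i]
    return tuple(out)
```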
If the weights are all positive, the vertex position can be expressed as a convex combination of the vertex transformed by each joint's coordinate system. Any possible position for that vertex, given that influence set, will lie in the convex hull of the vertex rigidly transformed by each joint in the influence set. This can be thought of as the subspace, in 3D space, of possible vertex positions. But, really, if you want to know more details, look at the paper.
After we could display the subspace, we decided that it would be really good if we could scrap the whole painting thing and just transform that vertex around in the subspace, using any traditional operation like translate, rotate, or scale, and compute the weights on the fly. It works pretty well, and for more details look at the paper.
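For the simplest two-influence case, recovering weights from a dragged vertex position reduces to projecting the target onto the segment between the two rigidly transformed positions. This is a simplified stand-in for the paper's general solve, with assumed names throughout:

```python
def weights_from_target(q0, q1, t):
    # Two-influence case: find w such that w*q0 + (1-w)*q1 is the point on
    # the segment [q1, q0] closest to target t, clamping to keep both
    # weights non-negative. Illustrative; the paper handles the general case.
    d = [a - b for a, b in zip(q0, q1)]
    denom = sum(x * x for x in d) or 1.0
    w = sum((tc - bc) * dc for tc, bc, dc in zip(t, q1, d)) / denom
    w = max(0.0, min(1.0, w))
    return w, 1.0 - w
```

Clamping corresponds to staying inside the convex hull: a target dragged outside the subspace snaps to the nearest achievable position.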
Download the Win32 binary with support files: skinpaint.zip
Read the User Documentation.
Luke Tokheim, <firstname.lastname@example.org>, May 13, 2003